Add Batch 2b372f56-6763-4ffd-b423-a611d7410d1a data
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +63 -0
- 2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/3a7fb4bc-7ab2-48d7-8fe1-b57998e3e88f_content_list.json +0 -0
- 2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/3a7fb4bc-7ab2-48d7-8fe1-b57998e3e88f_model.json +0 -0
- 2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/3a7fb4bc-7ab2-48d7-8fe1-b57998e3e88f_origin.pdf +3 -0
- 2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/full.md +445 -0
- 2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/images.zip +3 -0
- 2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/layout.json +0 -0
- 2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_content_list.json +2363 -0
- 2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_model.json +0 -0
- 2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_origin.pdf +3 -0
- 2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/full.md +397 -0
- 2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/images.zip +3 -0
- 2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/layout.json +0 -0
- 2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/26c880de-fdc1-4e48-8796-76b92fa174c4_content_list.json +0 -0
- 2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/26c880de-fdc1-4e48-8796-76b92fa174c4_model.json +0 -0
- 2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/26c880de-fdc1-4e48-8796-76b92fa174c4_origin.pdf +3 -0
- 2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/full.md +391 -0
- 2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/images.zip +3 -0
- 2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/layout.json +0 -0
- 2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/c2b049d1-b436-4236-92b0-5c5a9fd00514_content_list.json +0 -0
- 2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/c2b049d1-b436-4236-92b0-5c5a9fd00514_model.json +0 -0
- 2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/c2b049d1-b436-4236-92b0-5c5a9fd00514_origin.pdf +3 -0
- 2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/full.md +535 -0
- 2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/images.zip +3 -0
- 2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/layout.json +0 -0
- 2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/ef465396-e2c2-470e-b46d-fc568784c577_content_list.json +0 -0
- 2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/ef465396-e2c2-470e-b46d-fc568784c577_model.json +0 -0
- 2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/ef465396-e2c2-470e-b46d-fc568784c577_origin.pdf +3 -0
- 2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/full.md +0 -0
- 2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/images.zip +3 -0
- 2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/layout.json +0 -0
- 2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/17c242f9-b044-448b-b875-ed886c7a216b_content_list.json +0 -0
- 2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/17c242f9-b044-448b-b875-ed886c7a216b_model.json +0 -0
- 2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/17c242f9-b044-448b-b875-ed886c7a216b_origin.pdf +3 -0
- 2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/full.md +569 -0
- 2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/images.zip +3 -0
- 2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/layout.json +0 -0
- 2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/93d9876c-9d63-47e7-b619-7a2efb148ce1_content_list.json +0 -0
- 2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/93d9876c-9d63-47e7-b619-7a2efb148ce1_model.json +0 -0
- 2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/93d9876c-9d63-47e7-b619-7a2efb148ce1_origin.pdf +3 -0
- 2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/full.md +506 -0
- 2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/images.zip +3 -0
- 2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/layout.json +0 -0
- 2024/Text Embedding Inversion Security for Multilingual Language Models/d5a1911b-6437-4a76-99a3-4662bc3b1fe0_content_list.json +0 -0
- 2024/Text Embedding Inversion Security for Multilingual Language Models/d5a1911b-6437-4a76-99a3-4662bc3b1fe0_model.json +0 -0
- 2024/Text Embedding Inversion Security for Multilingual Language Models/d5a1911b-6437-4a76-99a3-4662bc3b1fe0_origin.pdf +3 -0
- 2024/Text Embedding Inversion Security for Multilingual Language Models/full.md +0 -0
- 2024/Text Embedding Inversion Security for Multilingual Language Models/images.zip +3 -0
- 2024/Text Embedding Inversion Security for Multilingual Language Models/layout.json +0 -0
- 2024/Text-like Encoding of Collaborative Information in Large Language Models for Recommendation/7a9b4ddb-f1a8-4364-9a24-e043e4b29411_content_list.json +1739 -0
.gitattributes
CHANGED
@@ -4799,3 +4799,66 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
2024/TAMS_[[:space:]]Translation-Assisted[[:space:]]Morphological[[:space:]]Segmentation/9226f2fa-be77-4033-9956-96b4289b5afd_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TTM-RE_[[:space:]]Memory-Augmented[[:space:]]Document-Level[[:space:]]Relation[[:space:]]Extraction/243f8b7d-6566-4acd-949b-acfeec8313f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TaPERA_[[:space:]]Enhancing[[:space:]]Faithfulness[[:space:]]and[[:space:]]Interpretability[[:space:]]in[[:space:]]Long-Form[[:space:]]Table[[:space:]]QA[[:space:]]by[[:space:]]Content[[:space:]]Planning[[:space:]]and[[:space:]]Execution-based[[:space:]]Reasoning/3757a026-4581-4e37-bfc9-146515ae40ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TaSL_[[:space:]]Continual[[:space:]]Dialog[[:space:]]State[[:space:]]Tracking[[:space:]]via[[:space:]]Task[[:space:]]Skill[[:space:]]Localization[[:space:]]and[[:space:]]Consolidation/3a7fb4bc-7ab2-48d7-8fe1-b57998e3e88f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Talk[[:space:]]With[[:space:]]Human-like[[:space:]]Agents_[[:space:]]Empathetic[[:space:]]Dialogue[[:space:]]Through[[:space:]]Perceptible[[:space:]]Acoustic[[:space:]]Reception[[:space:]]and[[:space:]]Reaction/4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TasTe_[[:space:]]Teaching[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]to[[:space:]]Translate[[:space:]]through[[:space:]]Self-Reflection/26c880de-fdc1-4e48-8796-76b92fa174c4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TaxoLLaMA_[[:space:]]WordNet-based[[:space:]]Model[[:space:]]for[[:space:]]Solving[[:space:]]Multiple[[:space:]]Lexical[[:space:]]Semantic[[:space:]]Tasks/c2b049d1-b436-4236-92b0-5c5a9fd00514_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Tell[[:space:]]Me[[:space:]]More![[:space:]]Towards[[:space:]]Implicit[[:space:]]User[[:space:]]Intention[[:space:]]Understanding[[:space:]]of[[:space:]]Language[[:space:]]Model[[:space:]]Driven[[:space:]]Agents/ef465396-e2c2-470e-b46d-fc568784c577_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Temperature-scaling[[:space:]]surprisal[[:space:]]estimates[[:space:]]improve[[:space:]]fit[[:space:]]to[[:space:]]human[[:space:]]reading[[:space:]]times[[:space:]]–[[:space:]]but[[:space:]]does[[:space:]]it[[:space:]]do[[:space:]]so[[:space:]]for[[:space:]]the[[:space:]]“right[[:space:]]reasons”_/17c242f9-b044-448b-b875-ed886c7a216b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Temporal[[:space:]]Knowledge[[:space:]]Question[[:space:]]Answering[[:space:]]via[[:space:]]Abstract[[:space:]]Reasoning[[:space:]]Induction/93d9876c-9d63-47e7-b619-7a2efb148ce1_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Text[[:space:]]Embedding[[:space:]]Inversion[[:space:]]Security[[:space:]]for[[:space:]]Multilingual[[:space:]]Language[[:space:]]Models/d5a1911b-6437-4a76-99a3-4662bc3b1fe0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Text-like[[:space:]]Encoding[[:space:]]of[[:space:]]Collaborative[[:space:]]Information[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Recommendation/7a9b4ddb-f1a8-4364-9a24-e043e4b29411_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Text-to-Song_[[:space:]]Towards[[:space:]]Controllable[[:space:]]Music[[:space:]]Generation[[:space:]]Incorporating[[:space:]]Vocal[[:space:]]and[[:space:]]Accompaniment/63f33ffd-d882-4fee-bd9a-7c7e56c6b9fb_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Belebele[[:space:]]Benchmark_[[:space:]]a[[:space:]]Parallel[[:space:]]Reading[[:space:]]Comprehension[[:space:]]Dataset[[:space:]]in[[:space:]]122[[:space:]]Language[[:space:]]Variants/49ef3705-f498-417e-a7c3-647c3db6ccb6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Dawn[[:space:]]After[[:space:]]the[[:space:]]Dark_[[:space:]]An[[:space:]]Empirical[[:space:]]Study[[:space:]]on[[:space:]]Factuality[[:space:]]Hallucination[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/62bc92c0-33de-4627-a3a7-9301148828f8_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Earth[[:space:]]is[[:space:]]Flat[[:space:]]because..._[[:space:]]Investigating[[:space:]]LLMs’[[:space:]]Belief[[:space:]]towards[[:space:]]Misinformation[[:space:]]via[[:space:]]Persuasive[[:space:]]Conversation/8a72206d-9a2d-41c5-957f-47a24030db74_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Echoes[[:space:]]of[[:space:]]Multilinguality_[[:space:]]Tracing[[:space:]]Cultural[[:space:]]Value[[:space:]]Shifts[[:space:]]during[[:space:]]Language[[:space:]]Model[[:space:]]Fine-tuning/1efe0d72-72fb-445c-8b37-4b72b8e1faa0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Fine-Tuning[[:space:]]Paradox_[[:space:]]Boosting[[:space:]]Translation[[:space:]]Quality[[:space:]]Without[[:space:]]Sacrificing[[:space:]]LLM[[:space:]]Abilities/fb620f2f-7891-4190-bcdd-bc27f884d5a2_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Heuristic[[:space:]]Core_[[:space:]]Understanding[[:space:]]Subnetwork[[:space:]]Generalization[[:space:]]in[[:space:]]Pretrained[[:space:]]Language[[:space:]]Models/2909a0f7-0ea9-4b47-8b98-7a033bc9509f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Hidden[[:space:]]Space[[:space:]]of[[:space:]]Transformer[[:space:]]Language[[:space:]]Adapters/07347b75-5f78-4c77-82a4-04091f09f9e5_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]MERSA[[:space:]]Dataset[[:space:]]and[[:space:]]a[[:space:]]Transformer-Based[[:space:]]Approach[[:space:]]for[[:space:]]Speech[[:space:]]Emotion[[:space:]]Recognition/e022e41f-efd8-417c-a189-d8263c7ff3ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/The[[:space:]]Unreasonable[[:space:]]Effectiveness[[:space:]]of[[:space:]]Easy[[:space:]]Training[[:space:]]Data[[:space:]]for[[:space:]]Hard[[:space:]]Tasks/9837a368-e0a6-482c-9917-21198b1a51fb_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Think[[:space:]]Twice_[[:space:]]Perspective-Taking[[:space:]]Improves[[:space:]]Large[[:space:]]Language[[:space:]]Models’[[:space:]]Theory-of-Mind[[:space:]]Capabilities/8e266c19-07f4-4a31-b155-385184941792_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Threads[[:space:]]of[[:space:]]Subtlety_[[:space:]]Detecting[[:space:]]Machine-Generated[[:space:]]Texts[[:space:]]Through[[:space:]]Discourse[[:space:]]Motifs/32390011-b6a4-4ab3-aafd-a7f3c9b9aa75_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Through[[:space:]]the[[:space:]]Lens[[:space:]]of[[:space:]]Split[[:space:]]Vote_[[:space:]]Exploring[[:space:]]Disagreement,[[:space:]]Difficulty[[:space:]]and[[:space:]]Calibration[[:space:]]in[[:space:]]Legal[[:space:]]Case[[:space:]]Outcome[[:space:]]Classification/ba899e23-807c-42dd-8119-ca58c9b3fba6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Through[[:space:]]the[[:space:]]MUD_[[:space:]]A[[:space:]]Multi-Defendant[[:space:]]Charge[[:space:]]Prediction[[:space:]]Benchmark[[:space:]]with[[:space:]]Linked[[:space:]]Crime[[:space:]]Elements/1888f01f-c768-4356-bd68-09a6b38d5fcd_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Time[[:space:]]is[[:space:]]Encoded[[:space:]]in[[:space:]]the[[:space:]]Weights[[:space:]]of[[:space:]]Finetuned[[:space:]]Language[[:space:]]Models/df4ffc5e-c3d5-4cd1-b773-2a5af2da4217_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TimeArena_[[:space:]]Shaping[[:space:]]Efficient[[:space:]]Multitasking[[:space:]]Language[[:space:]]Agents[[:space:]]in[[:space:]]a[[:space:]]Time-Aware[[:space:]]Simulation/ab040339-5cf1-4cd7-a8b3-06eefd0cbdbe_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TimeBench_[[:space:]]A[[:space:]]Comprehensive[[:space:]]Evaluation[[:space:]]of[[:space:]]Temporal[[:space:]]Reasoning[[:space:]]Abilities[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/238ca1f8-9279-4ec5-9f4c-8e81c29cb593_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Timeline-based[[:space:]]Sentence[[:space:]]Decomposition[[:space:]]with[[:space:]]In[[:space:]]Context[[:space:]]Learning[[:space:]]for[[:space:]]Temporal[[:space:]]Fact[[:space:]]Extraction/c4e41865-64c6-4290-a588-cbe3f30e6706_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/To[[:space:]]Distill[[:space:]]or[[:space:]]Not[[:space:]]to[[:space:]]Distill_[[:space:]]On[[:space:]]the[[:space:]]Robustness[[:space:]]of[[:space:]]Robust[[:space:]]Knowledge[[:space:]]Distillation/495e759f-8f50-4a6c-ac6c-3fb3d0170510_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/To[[:space:]]Generate[[:space:]]or[[:space:]]to[[:space:]]Retrieve_[[:space:]]On[[:space:]]the[[:space:]]Effectiveness[[:space:]]of[[:space:]]Artificial[[:space:]]Contexts[[:space:]]for[[:space:]]Medical[[:space:]]Open-Domain[[:space:]]Question[[:space:]]Answering/507c2cac-875b-440e-b5d6-5a21c4ebb1e4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/To[[:space:]]be[[:space:]]Continuous,[[:space:]]or[[:space:]]to[[:space:]]be[[:space:]]Discrete,[[:space:]]Those[[:space:]]are[[:space:]]Bits[[:space:]]of[[:space:]]Questions/ab1d343d-1605-49b1-88fa-a026b00295f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Token-wise[[:space:]]Influential[[:space:]]Training[[:space:]]Data[[:space:]]Retrieval[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models/3bb40750-3cdb-4c9f-80ea-a2835481a773_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/ToolSword_[[:space:]]Unveiling[[:space:]]Safety[[:space:]]Issues[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Tool[[:space:]]Learning[[:space:]]Across[[:space:]]Three[[:space:]]Stages/2e44ac53-038b-485c-9546-e13e28950a3c_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Toward[[:space:]]In-Context[[:space:]]Teaching_[[:space:]]Adapting[[:space:]]Examples[[:space:]]to[[:space:]]Students’[[:space:]]Misconceptions/3f33eb33-7922-496c-a893-4087f58398f0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Towards[[:space:]]Better[[:space:]]Understanding[[:space:]]of[[:space:]]Contrastive[[:space:]]Sentence[[:space:]]Representation[[:space:]]Learning_[[:space:]]A[[:space:]]Unified[[:space:]]Paradigm[[:space:]]for[[:space:]]Gradient/bfad7791-93b4-4ef1-bd8b-653007d380f3_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Towards[[:space:]]Faithful[[:space:]]and[[:space:]]Robust[[:space:]]LLM[[:space:]]Specialists[[:space:]]for[[:space:]]Evidence-Based[[:space:]]Question-Answering/c4912d7d-2e70-4dd2-b668-0b209c3f0310_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Towards[[:space:]]Privacy-Aware[[:space:]]Sign[[:space:]]Language[[:space:]]Translation[[:space:]]at[[:space:]]Scale/b0f33399-a26f-448f-b61e-4269978971bb_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Towards[[:space:]]Real-World[[:space:]]Writing[[:space:]]Assistance_[[:space:]]A[[:space:]]Chinese[[:space:]]Character[[:space:]]Checking[[:space:]]Benchmark[[:space:]]with[[:space:]]Faked[[:space:]]and[[:space:]]Misspelled[[:space:]]Characters/021b61cc-8b6e-4a29-b537-22991f3b0e0e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Towards[[:space:]]Real-world[[:space:]]Scenario_[[:space:]]Imbalanced[[:space:]]New[[:space:]]Intent[[:space:]]Discovery/9fc4cb72-312d-458c-91df-0aab8f1dee92_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Towards[[:space:]]Robust[[:space:]]and[[:space:]]Generalized[[:space:]]Parameter-Efficient[[:space:]]Fine-Tuning[[:space:]]for[[:space:]]Noisy[[:space:]]Label[[:space:]]Learning/ef2f91ae-d023-4676-a6a4-8493d7e49947_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Tracking[[:space:]]the[[:space:]]Newsworthiness[[:space:]]of[[:space:]]Public[[:space:]]Documents/1ee817fd-f3f5-4a5a-a6fa-17c4fe8a1988_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Training[[:space:]]Language[[:space:]]Models[[:space:]]to[[:space:]]Generate[[:space:]]Text[[:space:]]with[[:space:]]Citations[[:space:]]via[[:space:]]Fine-grained[[:space:]]Rewards/c11e2182-9225-4a09-aab7-8e7367cafd19_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Transferable[[:space:]]Embedding[[:space:]]Inversion[[:space:]]Attack_[[:space:]]Uncovering[[:space:]]Privacy[[:space:]]Risks[[:space:]]in[[:space:]]Text[[:space:]]Embeddings[[:space:]]without[[:space:]]Model[[:space:]]Queries/3579b15c-fae3-4d8b-84c4-a4881f541640_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Transferable[[:space:]]and[[:space:]]Efficient[[:space:]]Non-Factual[[:space:]]Content[[:space:]]Detection[[:space:]]via[[:space:]]Probe[[:space:]]Training[[:space:]]with[[:space:]]Offline[[:space:]]Consistency[[:space:]]Checking/68a768c1-e03d-47a7-a8bc-7c78b0095220_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Transitive[[:space:]]Consistency[[:space:]]Constrained[[:space:]]Learning[[:space:]]for[[:space:]]Entity-to-Entity[[:space:]]Stance[[:space:]]Detection/36032243-e40c-4f1c-a2a2-b37846177394_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Translation-based[[:space:]]Lexicalization[[:space:]]Generation[[:space:]]and[[:space:]]Lexical[[:space:]]Gap[[:space:]]Detection_[[:space:]]Application[[:space:]]to[[:space:]]Kinship[[:space:]]Terms/af057e5a-6422-4856-8c64-8ecdc012aa0b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TransliCo_[[:space:]]A[[:space:]]Contrastive[[:space:]]Learning[[:space:]]Framework[[:space:]]to[[:space:]]Address[[:space:]]the[[:space:]]Script[[:space:]]Barrier[[:space:]]in[[:space:]]Multilingual[[:space:]]Pretrained[[:space:]]Language[[:space:]]Models/69001d0a-16cf-49c2-9e07-b04c6cd9754e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Transparent[[:space:]]and[[:space:]]Scrutable[[:space:]]Recommendations[[:space:]]Using[[:space:]]Natural[[:space:]]Language[[:space:]]User[[:space:]]Profiles/197aedec-5ef5-44ac-a377-9ebb1e3b3ec5_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Tree[[:space:]]Transformer’s[[:space:]]Disambiguation[[:space:]]Ability[[:space:]]of[[:space:]]Prepositional[[:space:]]Phrase[[:space:]]Attachment[[:space:]]and[[:space:]]Garden[[:space:]]Path[[:space:]]Effects/6fcb6817-e261-4514-b407-4e404a7ebc4f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Tree-Averaging[[:space:]]Algorithms[[:space:]]for[[:space:]]Ensemble-Based[[:space:]]Unsupervised[[:space:]]Discontinuous[[:space:]]Constituency[[:space:]]Parsing/f4c96365-3438-45a9-8855-aec5612a476d_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Tree-of-Counterfactual[[:space:]]Prompting[[:space:]]for[[:space:]]Zero-Shot[[:space:]]Stance[[:space:]]Detection/d51df594-b761-4770-b56c-d9ff7d3659a5_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Tree-of-Traversals_[[:space:]]A[[:space:]]Zero-Shot[[:space:]]Reasoning[[:space:]]Algorithm[[:space:]]for[[:space:]]Augmenting[[:space:]]Black-box[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Knowledge[[:space:]]Graphs/910758f8-420f-411e-b77a-f83c7d6eb05b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Trial[[:space:]]and[[:space:]]Error_[[:space:]]Exploration-Based[[:space:]]Trajectory[[:space:]]Optimization[[:space:]]of[[:space:]]LLM[[:space:]]Agents/cf395f35-4081-4918-a48a-2d51b195ea1f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Triple-Encoders_[[:space:]]Representations[[:space:]]That[[:space:]]Fire[[:space:]]Together,[[:space:]]Wire[[:space:]]Together/79b44f21-ae0d-474d-b41b-c60dc2aa334e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TruthX_[[:space:]]Alleviating[[:space:]]Hallucinations[[:space:]]by[[:space:]]Editing[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Truthful[[:space:]]Space/fbf3380f-35e7-4036-b9d4-88cb04575127_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Tuning[[:space:]]Large[[:space:]]Multimodal[[:space:]]Models[[:space:]]for[[:space:]]Videos[[:space:]]using[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]from[[:space:]]AI[[:space:]]Feedback/8ab30447-f473-417e-beb6-621ee067db4f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/UHGEval_[[:space:]]Benchmarking[[:space:]]the[[:space:]]Hallucination[[:space:]]of[[:space:]]Chinese[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]via[[:space:]]Unconstrained[[:space:]]Generation/f82b16ce-dd52-44b4-bf06-47d3521085ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/UNIMO-G_[[:space:]]Unified[[:space:]]Image[[:space:]]Generation[[:space:]]through[[:space:]]Multimodal[[:space:]]Conditional[[:space:]]Diffusion/a4ade08f-62bc-473f-a2b0-364313462422_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/UltraLink_[[:space:]]An[[:space:]]Open-Source[[:space:]]Knowledge-Enhanced[[:space:]]Multilingual[[:space:]]Supervised[[:space:]]Fine-tuning[[:space:]]Dataset/f9284ffe-e6de-4e66-b23c-35657a366ef6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Uncertainty[[:space:]]Aware[[:space:]]Learning[[:space:]]for[[:space:]]Language[[:space:]]Model[[:space:]]Alignment/0ca1670c-3659-46f6-a61a-fcdd427edf19_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Uncertainty-Guided[[:space:]]Modal[[:space:]]Rebalance[[:space:]]for[[:space:]]Hateful[[:space:]]Memes[[:space:]]Detection/f4b794b9-00f2-433e-b825-9596247b052e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Uncovering[[:space:]]the[[:space:]]Full[[:space:]]Potential[[:space:]]of[[:space:]]Visual[[:space:]]Grounding[[:space:]]Methods[[:space:]]in[[:space:]]VQA/49ac657d-85be-41b1-8b8c-8df6eef82006_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Understanding[[:space:]]Retrieval[[:space:]]Robustness[[:space:]]for[[:space:]]Retrieval-augmented[[:space:]]Image[[:space:]]Captioning/9b7473ff-f22a-4c66-b360-2e80dd7b2e03_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Understanding[[:space:]]and[[:space:]]Addressing[[:space:]]the[[:space:]]Under-Translation[[:space:]]Problem[[:space:]]from[[:space:]]the[[:space:]]Perspective[[:space:]]of[[:space:]]Decoding[[:space:]]Objective/1075b938-2636-4bce-8a2d-1a489fc900a0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/3a7fb4bc-7ab2-48d7-8fe1-b57998e3e88f_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/3a7fb4bc-7ab2-48d7-8fe1-b57998e3e88f_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/3a7fb4bc-7ab2-48d7-8fe1-b57998e3e88f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:371cd023e77a962b2da86c760bb48b75ef338eb2ed15e9c77a96e8e2eef41a4f
+size 4036759
2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/full.md
ADDED
@@ -0,0 +1,445 @@
# TaSL: Continual Dialog State Tracking via Task Skill Localization and Consolidation

Yujie Feng$^{1}$, Xu Chu$^{2,3,4}$, Yongxin Xu$^{2,3}$, Guangyuan Shi$^{1}$, Bo Liu$^{1}$, Xiao-Ming Wu$^{1*}$

$^{1}$Department of Computing, The Hong Kong Polytechnic University, Hong Kong S.A.R.
$^{2}$School of Computer Science, Peking University, Beijing, China
$^{3}$Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China
$^{4}$Center on Frontiers of Computing Studies, Peking University, Beijing, China

yujie.feng@connect.polyu.hk, xiao-ming.wu@polyu.edu.hk

# Abstract
A practical dialogue system requires the capacity for ongoing skill acquisition and adaptability to new tasks while preserving prior knowledge. However, current methods for Continual Dialogue State Tracking (DST), a crucial function of dialogue systems, struggle with the catastrophic forgetting issue and knowledge transfer between tasks. We present TaSL, a novel framework for task skill localization and consolidation that enables effective knowledge transfer without relying on memory replay. TaSL uses a novel group-wise technique to pinpoint task-specific and task-shared areas. Additionally, a fine-grained skill consolidation strategy protects task-specific knowledge from being forgotten while updating shared knowledge for bi-directional knowledge transfer. As a result, TaSL strikes a balance between preserving previous knowledge and excelling at new tasks. Comprehensive experiments on various backbones highlight the significant performance improvements of TaSL over existing state-of-the-art methods. The source code<sup>1</sup> is provided for reproducibility.
# 1 Introduction
With the rising popularity of conversational digital assistants, it is imperative for dialogue systems to integrate new services while sustaining proficiency in prior tasks seamlessly. Traditional research, often conducted within specific domains offline, falls short in adaptability to new scenarios (Ni et al., 2023). Retraining pre-trained language models (PLMs) from scratch is both challenging and resource-intensive (Liu et al., 2023), highlighting the necessity for efficient continual learning (CL) approaches in dialogue systems (Ke and Liu, 2022). Dialogue state tracking (DST), crucial for task-oriented dialogue systems, dynamically updates (domain, slot, value) triplets to capture user intentions precisely. The urgent demand for advancing DST models to accommodate emerging services has catalyzed the development of the Continual DST task (Cho et al., 2023).

Figure 1: Conceptual illustration of TaSL. By identifying task-relevant areas across both previously accumulated and current tasks, we can consolidate the task-specific and task-shared parameters to facilitate efficient knowledge transfer and mitigate forgetting.
An effective Continual DST system must address the issue of catastrophic forgetting (McCloskey and Cohen, 1989), where a model's proficiency in old tasks diminishes after learning new ones. It should also promote knowledge transfer (KT) (Ke et al., 2021) across domains<sup>2</sup> to enhance end-task performances. Knowledge transfer includes forward transfer, which improves new task performance using knowledge from previous tasks, and backward transfer, which enhances performance on previous tasks after learning a new relevant task. Striking a balance between retaining previous knowledge and excelling in new tasks is vital for success.
However, current Continual DST methods (Madotto et al., 2020; Liu et al., 2021; Cho et al., 2023; Feng et al., 2024) mainly focus on mitigating forgetting through memory replay or regularization, overlooking the advantages of KT that can be derived from the inherent correlations between different DST domains.

Task correlation in Continual DST is quite evident. For instance, domains like "Hotel" and "Restaurant" share semantically similar slots, such as "area" and "bookday", highlighting the need for models to identify and handle common information types. The similarity in these domain-shared slots is crucial for enabling KT. However, learning domain-specific slots like "food" for "Restaurant" could introduce unique information that disrupts the retention of previously acquired knowledge, leading to catastrophic forgetting.
To address these challenges, we introduce Task Skill Localization and Consolidation (TaSL), a framework designed to improve KT between tasks without relying on memory replay. This is achieved by identifying and consolidating the importance distribution of model parameters across tasks. TaSL initially employs a group-wise importance-aware skill localization technique that utilizes gradient trajectories to pinpoint tiny regions in the model that store crucial knowledge for the current task. By comparing the importance distribution with those of previous tasks, we can differentiate between task-specific and task-shared parameters, as illustrated in Figure 1. Our innovative skill consolidation phase then categorically integrates weights from previous tasks with the current one, enabling effective KT while minimizing forgetting.
In detail, the importance-aware skill localization method employs a new group-wise metric to compute importance scores, effectively quantifying the significance of each "skill unit" for the current task. Our approach, focusing on parameter space rather than dataset-driven categorization of domain-shared and domain-specific slots, offers a more robust solution that accurately identifies task-specific and task-shared knowledge, overcoming inaccuracies caused by dataset noise.
Our skill consolidation stage, then based on a fine-grained model averaging strategy, effectively manages different types of knowledge. We enable forward KT to new tasks by starting with a model initialized with weights from previously fine-tuned tasks, thus using past knowledge to improve learning for new tasks without restrictions. For backward KT, we merge knowledge from both current and past accumulated tasks into localized task-shared skill units, thereby enhancing their capability. To prevent catastrophic forgetting, we
consolidate the integrity of skill units containing previous task-specific knowledge, ensuring they remain unaffected by new task learning. Through extensive experiments on different parameter-level backbones (from 60M to 7B), TaSL exhibits superior performance in mitigating forgetting and showcases remarkable capabilities for KT, significantly outperforming state-of-the-art methods.
Our main contributions include:
- We propose a novel task skill localization and consolidation (TaSL) framework for CL.
- We develop new group-wise skill localization and fine-grained skill consolidation techniques.
- Extensive evaluation on continual DST tasks shows TaSL effectively enables knowledge transfer, resulting in a $3.1\%$ absolute increase in Avg. JGA and an $8.8\%$ absolute boost in BWT metrics compared to previous SOTA methods.
# 2 Related Work
# 2.1 Continual Dialogue State Tracking
Continual Learning (CL) in task-oriented dialogue systems focuses on perpetually integrating knowledge from data streams for future application. Three kinds of CL methods have been developed. Architecture-based methods propose dynamically adding model weights when learning new data (Geng et al., 2021; Lu et al., 2021b; Yang et al., 2023). Replay-based methods store and replay some training samples from previous tasks (Hou et al., 2019; Lu et al., 2021a; Xu et al., 2023a). Regularization-based methods employ additional loss functions to solidify new knowledge (Li and Hoiem, 2017; Xu et al., 2023b).
In the realm of Continual DST, pioneering efforts by Madotto et al. (2020) and Liu et al. (2021) have leveraged these CL strategies to set benchmark performance using PLMs. The DST-EGQA approach by Cho et al. (2023) reformulates the DST task to an example-guided question-answering task, aiming to align distribution shifts across different domains to mitigate forgetting. However, these methods overlook DST task correlations that could enhance knowledge transfer. The recent Continual Prompt Tuning (CPT) method by Zhu et al. (2022) attempts knowledge transfer via domain-specific soft prompts but depends on inefficient memory replay and extensive retraining. This dataset-driven approach is inefficient and lacks robustness.
Our TaSL innovates by distinguishing between domain-specific and domain-shared knowledge
within the parameter space, then leveraging the skill consolidation process for effective knowledge transfer and forgetting mitigation.
# 2.2 Task Skill Localization
Research indicates that model parameters contribute unevenly to performance (Michel et al., 2019). Panigrahi et al. (2023) introduced the concept of "skill localization" to identify crucial parameters within PLMs, suggesting that fine-tuning critical parameters nearly matches the effect of full fine-tuning. However, their method requires additional time for identifying and retraining key parameters post-fine-tuning, lowering efficiency.
Drawing inspiration from the pruning community, previous studies have used gradient-based metrics to identify important parameters during fine-tuning. Sensitivity-based scoring (Sanh et al., 2020; Zhang et al., 2022b) assesses the impact on training loss and sensitivity smoothing, as applied by Zhang et al. (2022a), eliminates unnecessary parameters for more efficient fine-tuning. However, these approaches, focusing on individual parameter importance, often lead to element-wise pruning with huge computational and storage burdens (Feng et al., 2023a). Based on these advances, we introduce a new importance-aware skill localization method, for the first time, that distinguishes between task-specific and shared parameters to mitigate forgetting in CL.
# 3 Proposed Method: TaSL
Problem Formulation In continual DST, we aim to train a model $f: \mathcal{X} \times \mathcal{T} \to \mathcal{Y}$ across a sequence of dialogue domains $\mathcal{T}_1, \ldots, \mathcal{T}_K$ . Each dialogue domain has its dataset $\mathcal{D}_k$ for $\mathcal{T}_k$ . This model predicts the target $y$ based on input $x$ and task $\mathcal{T}_k \in \mathcal{T}$ . The notation $f_k$ refers to the model after training on task $\mathcal{T}_k$ , while $\hat{f}_k$ denotes the model after averaging for $\hat{f}_{k-1}$ and $f_k$ . Within a given task $\mathcal{T}_k$ , a dialogue with $M$ turns of interaction between the system and the user is denoted as $\mathcal{X}_M = \{(A_1, U_1), (A_2, U_2), \ldots, (A_M, U_M)\}$ , where $A$ and $U$ represent the system's response and the user's input, respectively.
Each task $\mathcal{T}_k$ is associated with a predefined slot set $S = \{S_1,\dots ,S_J\}$ , where $J$ is the total number of slots. The objective of DST is
to predict the dialogue state $\mathcal{B}_m$ based on the dialogue context $\mathcal{X}_m$ . The dialogue state is a collection of (slot, value) pairs, expressed as $\mathcal{B}_m = \{(S_1,V_1^m),\ldots ,(S_J,V_J^m)\}$ , where $V_{J}^{m}$ is the value for slot $S_{J}$ at turn $m$ . DST involves training a model $f:\mathcal{X}_m\oplus S_j\to V_j^m$ , with $\oplus$ denotes simple text concatenation.
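
To make the formulation concrete, below is a minimal sketch of how one training example could be serialized under this concatenation scheme; the `[sys]`/`[usr]`/`[slot]` markers and the helper name are illustrative assumptions, not the paper's exact preprocessing.

```python
def dst_example(context_turns, slot):
    """Serialize one DST example: dialogue context X_m concatenated with slot S_j -> value V_j^m."""
    # context_turns is a list of (system_response, user_utterance) pairs up to turn m
    context = " ".join(f"[sys] {a} [usr] {u}" for a, u in context_turns)
    return f"{context} [slot] {slot}"

# A turn from a "Hotel" domain dialogue; the model's target output would be the value "east".
x = dst_example([("Hi, how can I help?", "I need a hotel in the east of town.")], "hotel-area")
```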
Overview TaSL includes two key components: (i) Skill Localization, utilizing a new group-wise importance metric to accurately identify the importance distribution of parameters across tasks, and (ii) Skill Consolidation, which employs a novel fine-grained model averaging strategy to integrate model weights from both current and past tasks for effective knowledge transfer. Figure 2 provides a comprehensive overview of TaSL, with the following subsections detailing each component.
# 3.1 Importance-aware Skill Localization
To address the substantial computational and storage demands imposed by previous parameter-level importance calculation methods (Konishi et al., 2023), we propose a new group-wise metric for evaluating the importance of each skill unit $u$ :

$$
\mathcal{I}(u) = \frac{1}{d_{1} \times d_{2}} \sum_{i=1}^{d_{1}} \sum_{j=1}^{d_{2}} s\left(w_{ij}\right) \tag{1}
$$
where $w_{ij}$ denotes the trainable parameters, and $d_1 \times d_2$ represents the total parameter count in a skill unit $u$ . $\mathcal{I}(u)$ measures the collective importance of all parameters within each skill unit, where higher values signify increased importance. The function $s(\cdot)$ is a designated importance function for individual parameters, defined as the magnitude of the gradient-weight product:

$$
I\left(w_{ij}\right) = \left| w_{ij} \nabla_{w_{ij}} \mathcal{L} \right| \tag{2}
$$
This approximates the loss change when a parameter is zeroed out. If removing a parameter has a significant influence, then the model is sensitive to it, and we should retain it (Liang et al., 2021).
However, sensitivity in Eq. (2) may not reliably indicate importance (Zhang et al., 2022b). This metric, calculated from a sampled mini-batch, suffers from variability due to stochastic sampling and training dynamics, introducing large uncertainty in estimating sensitivity. To mitigate this, we apply sensitivity smoothing and uncertainty quantification (Zhang et al., 2022a):

$$
\bar{I}^{(t)}\left(w_{ij}\right) = \alpha_{1}\,\bar{I}^{(t-1)}\left(w_{ij}\right) + (1 - \alpha_{1})\, I^{(t)}\left(w_{ij}\right) \tag{3}
$$

$$
\bar{U}^{(t)}\left(w_{ij}\right) = \alpha_{2}\,\bar{U}^{(t-1)}\left(w_{ij}\right) + (1 - \alpha_{2})\left| I^{(t)}\left(w_{ij}\right) - \bar{I}^{(t)}\left(w_{ij}\right) \right| \tag{4}
$$

Figure 2: Overview of TaSL. Step 1: We compute the importance scores of skill units for the current task $\mathcal{T}_k$ using our importance-aware skill localization method during fine-tuning. Step 2: Based on a fine-grained model averaging strategy, the skill consolidation method merges the model $\hat{f}_{k-1}$, which accumulates knowledge of all previous tasks, with the current task's model $f_{k}$. The integration is guided by the importance distributions of skill units across various tasks. We then update the cumulative importance scores for all skill units until task $\mathcal{T}_k$ using Eq. (6). This process is designed to be iteratively repeated with the introduction of each subsequent task.
where $\alpha_{1}$ and $\alpha_{2}$ are smoothing factors, and $t$ is the iteration number. $\bar{I}^{(t)}$ represents smoothed sensitivity by exponential moving average and $\bar{U}^{(t)}$ is the uncertainty term quantified by the local variation between $I^{(t)}$ and $\bar{I}^{(t)}$ . Importance is then defined by multiplying $\bar{I}^{(t)}$ and $\bar{U}^{(t)}$ , providing a more accurate importance assessment for $s(\cdot)$ :

$$
s^{(t)}\left(w_{ij}\right) = \bar{I}^{(t)}\left(w_{ij}\right) \cdot \bar{U}^{(t)}\left(w_{ij}\right) \tag{5}
$$
Calculating current task importance scores. To compute the importance score of each skill unit for task $\mathcal{T}_k$ , we employ Eq. (1) during finetuning. The model $f$ with $n$ skill units is denoted as $\mathcal{U} = \{u_1,\dots ,u_n\}$ , with their importance scores for task $\mathcal{T}_k$ denoted by $\mathcal{I}(\mathcal{U}_k)\in \mathbb{R}^n$ . The detailed computation process is provided in Algorithm 1.
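
As a rough sketch of Eqs. (1)-(5), the snippet below maintains the smoothed sensitivity and uncertainty of every trainable parameter during fine-tuning and then averages $s(w_{ij})$ within each skill unit; the grouping of parameters into skill units, the buffer layout, and the function names are assumptions for illustration, not the released implementation.

```python
import torch

def update_sensitivity_stats(model, stats, alpha1=0.85, alpha2=0.85):
    """One training step of Eqs. (2)-(4); call right after loss.backward()."""
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        sens = (p.detach() * p.grad.detach()).abs()  # Eq. (2): |w * dL/dw|
        if name not in stats:
            stats[name] = {"I": torch.zeros_like(sens), "U": torch.zeros_like(sens)}
        I_new = alpha1 * stats[name]["I"] + (1 - alpha1) * sens                  # Eq. (3)
        U_new = alpha2 * stats[name]["U"] + (1 - alpha2) * (sens - I_new).abs()  # Eq. (4)
        stats[name]["I"], stats[name]["U"] = I_new, U_new

def skill_unit_importance(stats, skill_units):
    """Eq. (1): average s = I_bar * U_bar (Eq. (5)) over the parameters of each unit.

    `skill_units` maps a unit name to the parameter names it groups together,
    e.g. one unit per attention or FFN weight matrix (an assumed granularity).
    """
    scores = {}
    for unit, names in skill_units.items():
        vals = [(stats[n]["I"] * stats[n]["U"]).mean() for n in names if n in stats]
        scores[unit] = torch.stack(vals).mean().item() if vals else 0.0
    return scores
```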
Computing accumulated importance scores for previous tasks. After computing importance scores for each skill unit at the current task $\mathcal{T}_k$ , it is essential to compare these with scores from all previously learned tasks to distinguish between task-specific and task-shared parameters. To avoid the inefficiency of storing scores for each past task, we aggregate importance scores from all prior tasks into a cumulative score for tasks up to $\mathcal{T}_{k-1}$ . This method allows for the iterative refinement of accumulated scores without separately saving past task scores. The skill units with these cumulative scores
up to $\mathcal{T}_{k - 1}$ are denoted as $\hat{\mathcal{U}}_{k - 1}$ , calculated using:

$$
\mathcal{I}\left(\hat{\mathcal{U}}_{k-1}\right) = \beta\,\operatorname{Norm}\left(\mathcal{I}\left(\hat{\mathcal{U}}_{k-2}\right)\right) + (1 - \beta)\,\operatorname{Norm}\left(\mathcal{I}\left(\mathcal{U}_{k-1}\right)\right) \tag{6}
$$
where $\beta \in [0,1]$ , and $\mathrm{Norm}(\cdot)$ normalizes importance scores to the $[0,1]$ range, thus resolving discrepancies across models. The initial scores, $\mathcal{I}(\hat{\mathcal{U}}_1)$ , are set to be equal to $\mathcal{I}(\mathcal{U}_1)$ . Following this, the importance distribution for skill units up to task $\mathcal{T}_{k-1}$ is combined with that of the current task, $\mathcal{T}_k$ , to facilitate the skill consolidation process.
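
A minimal sketch of this cumulative update, assuming the scores are kept in dictionaries keyed by skill-unit name and that $\operatorname{Norm}(\cdot)$ is a min-max rescaling (an assumption; the paper only states that scores are normalized to $[0,1]$):

```python
def accumulate_importance(prev_cum, curr, beta=0.7):
    """Eq. (6): blend normalized cumulative scores with the current task's scores."""
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {k: (v - lo) / (hi - lo + 1e-12) for k, v in scores.items()}
    prev_n, curr_n = norm(prev_cum), norm(curr)
    return {k: beta * prev_n[k] + (1 - beta) * curr_n[k] for k in curr_n}
```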
# 3.2 Skill Consolidation
After skill localization, the subsequent vital phase involves consolidating this knowledge into a unified framework. This process demands a sophisticated model averaging approach considering various factors to optimize task performance. Traditional coarse-grained model averaging assumes that all model weights are equally important for the training task (Kirkpatrick et al., 2017; Eddine Marouf et al., 2023), which can be written as the following iterative computation format:

$$
\hat{f}_{k} = \lambda \hat{f}_{k-1} + (1 - \lambda) f_{k} \tag{7}
$$

However, this method may overemphasize weights irrelevant to the current task, contaminating previously acquired task-specific knowledge and leading to forgetting. To counteract this, we introduce a fine-grained averaging strategy focusing on skill units rather than the entire model. Our approach distinguishes between task-shared and task-specific skill units, categorically applying weighted averaging to parameters within each skill unit.

Algorithm 1: Importance-aware Skill Localization

- Input: Training dataset $\mathcal{D}_k$ for task $\mathcal{T}_k$; total training iterations $T$; hyperparameters $\alpha_{1}, \alpha_{2}$.
- for $t = 1,\dots,T$ do: sample a mini-batch from $\mathcal{D}_k$ and compute the gradient $\nabla \mathcal{L}$; compute the sensitivity $I(w_{ij})$ via Eq. (2); update $\bar{I}^{(t)}$ via Eq. (3) and $\bar{U}^{(t)}$ via Eq. (4); end for.
- Compute the importance score $\mathcal{I}(u_i^k)$ for each skill unit $u_{i}^{k}$ by Eq. (1), for $i = 1,\ldots,n$.
- Output: $f_{k}$ and importance scores $\mathcal{I}(\mathcal{U}_k)$ for $\mathcal{U}_k$.
We initially set importance thresholds $\delta$ using quantiles to select the top $20\%$ of skill units based on importance scores. A skill unit $u_{i}^{k}$ is deemed important (denoted as $(u_{i}^{k})^{+}$ ) if its score $\mathcal{I}(u_{i}^{k})$ is above $\delta_{k}$ , and unimportant $((u_{i}^{k})^{-})$ otherwise.
Our fine-grained averaging strategy customizes parameter combination for each skill unit, based on its importance under different tasks, as follows:

$$
\hat{u}_{i}^{k} =
\begin{cases}
\gamma \hat{u}_{i}^{k-1} + (1-\gamma)\, u_{i}^{k}, & \text{if } (\hat{u}_{i}^{k-1})^{+}, (u_{i}^{k})^{+} \\
\hat{u}_{i}^{k-1}, & \text{if } (\hat{u}_{i}^{k-1})^{+}, (u_{i}^{k})^{-} \\
u_{i}^{k}, & \text{if } (\hat{u}_{i}^{k-1})^{-}, (u_{i}^{k})^{+} \\
\tfrac{1}{2}\left(\hat{u}_{i}^{k-1} + u_{i}^{k}\right), & \text{if } (\hat{u}_{i}^{k-1})^{-}, (u_{i}^{k})^{-}
\end{cases} \tag{8}
$$
This strategy performs the element-wise adjustment of parameters within each skill unit based on its relevance to previous and current tasks, using hyperparameter $\gamma$ to control their influences.
In the scenario where a skill unit $u_{i}$ is significant for both past and present tasks (case 1), we integrate newly acquired knowledge into this task-shared skill unit to enable backward KT. If a skill unit $u_{i}$ is crucial solely for previous tasks (case 2), we maintain the knowledge within this previous task-specific skill unit untouched to prevent the contamination of historical knowledge with task-irrelevant information. In contrast, for a skill unit important only to the current task (case 3), since the model $f_{k}$ is trained on the initialization of $\hat{f}_{k - 1}$ , the historically learned knowledge is utilized to enhance the performance of the current task, enabling forward KT. Thus, we ensure the integrity of parameters within this current task-specific skill unit,
preserving essential knowledge for excelling in the new task. We adopt a straightforward averaging for units not pertinent to either task (case 4).
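
A hedged sketch of this consolidation step is given below: the threshold is the 80th percentile of the scores, mirroring the top-$20\%$ rule above, while the dictionary layout and function names are illustrative assumptions.

```python
import torch

def consolidate(prev_units, curr_units, prev_scores, curr_scores, gamma=0.7, top=0.20):
    """Eq. (8): merge skill-unit weights of f_hat_{k-1} and f_k into f_hat_k."""
    def threshold(scores):
        # Importance threshold delta chosen so that the top `top` fraction counts as important.
        return torch.quantile(torch.tensor(list(scores.values())), 1.0 - top).item()

    d_prev, d_curr = threshold(prev_scores), threshold(curr_scores)
    merged = {}
    for name, w_curr in curr_units.items():
        w_prev = prev_units[name]
        prev_imp, curr_imp = prev_scores[name] > d_prev, curr_scores[name] > d_curr
        if prev_imp and curr_imp:      # case 1: task-shared unit, blend (backward KT)
            merged[name] = gamma * w_prev + (1 - gamma) * w_curr
        elif prev_imp:                 # case 2: previous-task-specific, keep old weights
            merged[name] = w_prev.clone()
        elif curr_imp:                 # case 3: current-task-specific, keep new weights
            merged[name] = w_curr.clone()
        else:                          # case 4: unimportant to both, plain average
            merged[name] = 0.5 * (w_prev + w_curr)
    return merged
```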
Skill consolidation is performed before starting a new task in CL, utilizing the averaged model for subsequent task initialization. Only the importance scores of $\hat{\mathcal{U}}_{k - 1}$ and $\mathcal{U}_k$ are retained for use between tasks, starting with $\hat{\mathcal{U}}_1 = \mathcal{U}_1$ estimated from $f_{1}$ on $D_{1}$ . Detailed implementation of TaSL algorithm can be found in the Appendix (Algorithm 2).
# 4 Experiments and Analysis
Dataset We use the continual learning for DST setup proposed by Zhu et al. (2022), which uses 15 single domains from the Schema-Guided Dialog dataset (SGD) (Rastogi et al., 2020). We aggregate our results over the same five domain orders to make the most reliable comparisons with prior works. Comparing results with the same order is crucial as the results can have significant variance depending on the chosen domains and their order. More details about data statistics, task selection, and orderings can be found in the Appendix A.
Evaluation Protocol We evaluate DST performance using the widely adopted Joint Goal Accuracy (JGA) metric (Wu et al., 2019), which indicates the percentage of turns for which all slot values are correctly predicted. We denote $a_{j,i}$ as the JGA on the test set of task $\mathcal{T}_i$ right after training on task $\mathcal{T}_j$ . The performance of Continual DST is assessed using three metrics from Zhu et al. (2022):

$$
\mathbf{Avg.\ JGA} = \frac{1}{K} \sum_{i=1}^{K} a_{K,i} \tag{9}
$$

$$
\mathbf{FWT} = \frac{1}{K-1} \sum_{i=2}^{K} a_{i-1,i} \tag{10}
$$

$$
\mathbf{BWT} = \frac{1}{K-1} \sum_{i=1}^{K-1} \left( a_{K,i} - a_{i,i} \right) \tag{11}
$$
Avg. JGA represents the average JGA across all tasks after training on the final task $\mathcal{T}_K$ . Forward Transfer (FWT) evaluates a model's generalization ability by measuring the averaged zero-shot performance. Backward Transfer (BWT) assesses the impact of learning on subsequent tasks on a previous task. Negative BWT indicates the model lost some previously acquired knowledge.
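
Written out, given the full matrix of results $a_{j,i}$ (JGA on task $\mathcal{T}_i$ after training on task $\mathcal{T}_j$), the three metrics take only a few lines of NumPy; this is simply a restatement of Eqs. (9)-(11):

```python
import numpy as np

def cl_metrics(a):
    """a[j, i] = JGA on task i's test set right after training on task j (K x K array)."""
    K = a.shape[0]
    avg_jga = a[K - 1].mean()                                     # Eq. (9): final-row average
    fwt = np.mean([a[i - 1, i] for i in range(1, K)])             # Eq. (10): zero-shot on the next task
    bwt = np.mean([a[K - 1, i] - a[i, i] for i in range(K - 1)])  # Eq. (11): negative means forgetting
    return avg_jga, fwt, bwt
```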
| Method | Memory-Free | Avg. JGA | FWT | BWT |
|---|---|---|---|---|
| Fine-tuning (Madotto et al., 2020) | ✓ | 44.1±0.9 | 8.3±1.0 | -36.6±3.9 |
| EWC (Kirkpatrick et al., 2017) | ✓ | 47.9±1.1 | 8.4±0.9 | -38.1±4.1 |
| AdapterCL (Madotto et al., 2020) | ✓ | 49.8±1.7 | - | - |
| DST-EGQA (Cho et al., 2023) | ✓ | 55.5±3.5 | 23.6±2.1 | -19.1±4.2 |
| RoS (Feng et al., 2024) | ✓ | 59.0±3.9 | 25.5±2.0 | -17.9±3.7 |
| TaSL (ours) | ✓ | 62.1±2.0 | 26.6±1.5 | -9.1±2.2 |
| Replay (Madotto et al., 2020) | ✘ | 58.6±3.5 | 10.9±0.5 | -3.2±2.3 |
| CPT (Zhu et al., 2022) | ✘ | 61.2±2.5 | 13.7±0.8 | 0.5±0.4 |
| DST-EGQA (Cho et al., 2023) | ✘ | 68.9±0.3 | 22.5±1.8 | -5.9±1.9 |
| RoS (Feng et al., 2024) | ✘ | 72.1±0.8 | 26.7±2.0 | -2.6±1.5 |
| CPT Multi-task (Zhu et al., 2022) | - | 64.0±1.9 | - | - |
| DST-EGQA Multi-task (Cho et al., 2023) | - | 74.2±1.8 | - | - |
| RoS Multi-task (Cho et al., 2023) | - | 76.3±0.3 | - | - |
Table 1: CL results of various methods, all utilizing the same T5-small backbone, on 15 different tasks from the SGD dataset. Means and standard variances are reported across five domain permutations. The last two rows provide the multi-tasking results, which serve as an upper bound. Our memory replay-free TaSL outperforms the previous best method, RoS, by achieving a $3.1\%$ absolute improvement on avg. JGA and an $8.8\%$ absolute increase in BWT. Additionally, TaSL exceeds the performance of the majority of memory replay methods and nearly matches the upper bound of the CPT multi-task method.
|
| 202 |
+
|
| 203 |
+
Baselines We evaluate our method against the following Continual DST baselines: Fine-tuning: Continuously fine-tune the backbone model on new task data. Replay: Randomly save $|\mathcal{M}|$ samples from the training set of each previous task $\mathcal{T}_i$ in memory $\mathcal{M}_i$ and jointly train the model on the new task data $\mathcal{D}_k$ and the memory $\mathcal{M}_{<k}$. EWC: Maintain a memory but leverage it only to compute the Fisher information matrix for regularization (Kirkpatrick et al., 2017). AdapterCL: Freeze the pre-trained model and independently train a residual Adapter (Houlsby et al., 2019) for each task (Madotto et al., 2020). Continual Prompt Tuning (CPT) (Zhu et al., 2022): Freeze the backbone model and continually train soft prompts with memory-guided knowledge transfer in both forward and backward directions. DST-EGQA (Cho et al., 2023): Reformulate DST as a QA task and mitigate forgetting with retrieval-augmented in-context learning. RoS (Feng et al., 2024): Utilize knowledge distillation to enhance the meta-reasoning ability of student models, thereby mitigating forgetting.
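As an illustration of the Replay baseline's data mixing (a minimal sketch; the sampling and training details follow Madotto et al. (2020) only loosely):

```python
import random

def build_replay_training_set(new_task_data, memories, mem_size=50):
    """Mix the new task's training data with up to mem_size stored samples from every previous task."""
    train_set = list(new_task_data)
    for old_samples in memories:                      # memories[i]: samples saved from task i's training set
        train_set += random.sample(old_samples, min(mem_size, len(old_samples)))
    random.shuffle(train_set)
    return train_set
```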
|
| 204 |
+
|
| 205 |
+
Training Details We utilize four backbones of different parameter scales for our experiments: T5-small (Raffel et al., 2020), T5-base, FlanT5-large (Chung et al., 2022), and LLaMA-7B (Touvron et al., 2023). For the T5-series models, we perform full fine-tuning across all parameters. For LLaMA-7B, we adopt a Parameter-Efficient Fine-Tuning technique, specifically Low-Rank Adaptation (LoRA) (Hu et al., 2021), to expedite the training process. For TaSL, we set the hyperparameters $\alpha_{1}$ and $\alpha_{2}$ in Eq. (3) and Eq. (4) to 0.85, and set $\beta$ in Eq. (6) and $\gamma$ in Eq. (8) to 0.7. The memory size per task $|\mathcal{M}|$ is kept at 50, in line with previous studies. Detailed training settings are provided in Appendix B.
|
| 208 |
+
|
| 209 |
+
Following this, we compare TaSL with baselines in Sec. 4.1, and present a comprehensive ablation study in Sec. 4.2. Subsequently, we delve deeper into the underlying success of our proposed importance-aware skill localization (Sec. 4.3) and skill consolidation techniques (Sec. 4.4), and get some insightful findings from this exploration.
|
| 210 |
+
|
| 211 |
+
# 4.1 Main Results
|
| 212 |
+
|
| 213 |
+
Overall CL results of different methods with the same T5-small backbone are summarized in Table 1.
|
| 214 |
+
|
| 215 |
+
TaSL demonstrates superior CL performance through effective knowledge transfer. Unlike vanilla fine-tuning, which suffers from catastrophic forgetting, our approach achieves a substantial improvement in Avg. JGA, raising it from $44.1\%$ to $62.1\%$, and shows marked gains in both FWT and BWT.
|
| 216 |
+
|
| 217 |
+
TaSL not only exceeds the CPT method, which relies on memory replay, advancing the Avg. JGA from $61.2\%$ to $62.1\%$ , but also establishes a new
|
| 218 |
+
|
| 219 |
+

|
| 220 |
+
Figure 3: Performance of TaSL w/ different backbones.
|
| 221 |
+
|
| 222 |
+
SOTA across all metrics among memory-free methods. Compared with the previous best memory-free results, we raise Avg. JGA from $59.0\%$ to $62.1\%$ (a $3.1\%$ absolute improvement) and improve BWT from $-17.9\%$ to $-9.1\%$, a gain of more than $8$ absolute points that reflects robust backward knowledge transfer. Additionally, TaSL obtains the highest FWT scores across all baselines under various conditions.
|
| 223 |
+
|
| 224 |
+
Remarkably, without relying on memory replay, our method nearly matches the performance of DST-EGQA with memory replay, particularly in BWT, with a minimal difference of $3.2\%$ $(-9.1\%$ vs. $-5.9\%)$ . Moreover, our Avg. JGA score closely approaches the upper bound performance set by the CPT multi-task strategy at $64\%$ , underscoring the effectiveness of our fine-grained model averaging strategy that meticulously accounts for domain-shared and domain-specific parameters.
|
| 225 |
+
|
| 226 |
+
TaSL consistently demonstrates superior performance across various backbones. To further substantiate the effectiveness of our framework, we conducted experiments with backbones of various parameter scales, as illustrated in Figure 3, which highlights performance gains with increasing model size. Notably, TaSL achieves a breakthrough by recording a positive BWT score on LLaMA-7B without employing any memory replay techniques. Across different backbones, our method consistently outperforms traditional approaches. For instance, with Flan-T5-large, TaSL significantly boosts the Avg. JGA metric from $56\%$ to $74\%$, and also achieves the most substantial improvements in both FWT and BWT, rising from $33\%$ to $43\%$ and improving from $-28\%$ to $-13\%$, respectively. These results further validate the generality of our proposed framework.
|
| 227 |
+
|
| 228 |
+
Fine-grained model averaging can effectively mitigate catastrophic forgetting. To rigorously evaluate our model's effectiveness in counteracting forgetting, we analyzed its performance on the
|
| 229 |
+
|
| 230 |
+

|
| 231 |
+
Figure 4: Performance trajectory of Task 1 during the Continual DST learning process.
|
| 232 |
+
|
| 233 |
+
initial task after training on subsequent tasks. Figure 4 illustrates that our method results in a notably slower forgetting rate, manifesting as a nearly $8\%$ average decrease in performance after training on the last task. This contrasts sharply with vanilla backbones, which display a substantial performance reduction of $20\%$ on average, thereby underscoring our method's superior capacity to mitigate forgetting. Moreover, an intriguing observation is the enhancement in performance on task 1 after training on task 3, highlighting our model's effective backward KT ability.
|
| 234 |
+
|
| 235 |
+
# 4.2 Ablation Study
|
| 236 |
+
|
| 237 |
+
In this section, we assess the effects of importance-aware skill localization and fine-grained skill consolidation; the results are discussed below. For hyperparameter sensitivity, see Appendix D.
|
| 238 |
+
|
| 239 |
+
Various importance scoring methods for skill localization. Our method calculates importance scores by Eq. (1). As shown in Table 2, we explore alternative importance scoring approaches: (i) modifying $s(\cdot)$ in Eq. (1) to include only sensitivity, as in Eq. (2); and (ii) using absolute gradients, $|\nabla_{w_{ij}}\mathcal{L}|$, for importance assessment (Michel et al., 2019). The results indicate that importance scoring with moving averages outperforms both alternatives, which cause performance drops of up to $3.26\%$, $2.24\%$, and $4.11\%$ on the three metrics, respectively. This highlights the value of accurate skill localization in improving model performance.
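For intuition, the sketch below shows one way to maintain moving-average importance scores from first-order information, in the spirit of Eqs. (1)-(4); the variable names and the product form are our illustrative assumptions rather than the exact definitions:

```python
import torch

def update_importance(param, sens_ema, unc_ema, alpha1=0.85, alpha2=0.85):
    """One smoothed-importance update for a parameter tensor after a backward pass."""
    sens = (param * param.grad).abs()                    # sensitivity: |w * dL/dw|
    unc = (sens - sens_ema).abs()                        # local variation (uncertainty) of the sensitivity
    sens_ema = alpha1 * sens_ema + (1 - alpha1) * sens   # moving average of sensitivity
    unc_ema = alpha2 * unc_ema + (1 - alpha2) * unc      # moving average of the variation
    importance = sens_ema * unc_ema                      # smoothed importance combining both terms
    return importance, sens_ema, unc_ema
```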
|
| 240 |
+
|
| 241 |
+
Fine-grained vs. coarse-grained model averaging for skill consolidation. We compared our fine-grained averaging strategy against two coarse-grained strategies: (i) Weight-Ensemble, which averages weights uniformly as per Eq. (7), and
|
| 242 |
+
|
| 243 |
+
<table><tr><td>Method</td><td>Avg. JGA</td><td>FWT</td><td>BWT</td></tr><tr><td>vanilla T5-small</td><td>44.10</td><td>8.32</td><td>-36.63</td></tr><tr><td>s (·) = I (·)</td><td>60.48</td><td>24.39</td><td>-10.81</td></tr><tr><td>s (·) = |∇wijL|</td><td>58.82</td><td>24.80</td><td>-13.22</td></tr><tr><td>TaSL (ours)</td><td>62.08</td><td>26.63</td><td>-9.11</td></tr></table>
|
| 244 |
+
|
| 245 |
+
Table 2: Ablation study. Evaluating the impact of importance scoring variations on skill localization.
|
| 246 |
+
|
| 247 |
+
<table><tr><td>Method</td><td>Avg. JGA</td><td>FWT</td><td>BWT</td></tr><tr><td>vanilla T5-small</td><td>44.10</td><td>8.32</td><td>-36.63</td></tr><tr><td>Weight-Ens.</td><td>53.23</td><td>21.73</td><td>-18.28</td></tr><tr><td>EMA</td><td>52.56</td><td>22.27</td><td>-16.80</td></tr><tr><td>Fine-grained (ours)</td><td>62.08</td><td>26.63</td><td>-9.11</td></tr></table>

Table 3: Ablation study. Comparing coarse- and fine-grained model averaging methods on skill consolidation.
|
| 248 |
+
|
| 249 |
+
(ii) Exponential Moving Average (EMA) (Szegedy et al., 2016), applying a running average of parameters at each fine-tuning iteration. Results are detailed in Table 3.
|
| 250 |
+
|
| 251 |
+
Weight-Ensemble significantly improves upon the vanilla model, highlighting coarse-grained averaging's benefits for Continual DST. EMA generally surpasses Weight-Ensemble but falls short of our fine-grained approach due to its overuse of averaging, with frequent parameter adjustments within the same task possibly resulting in less optimal outcomes. Our method solely averages weights after each task, enhancing computational efficiency.
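For contrast with the coarse-grained strategies above, here is a simplified per-skill-unit averaging rule in the spirit of our fine-grained strategy; the exact weighting is given in Eq. (8), so treat this as one reading of the idea rather than the precise formula:

```python
def consolidate_skill_unit(theta_prev, theta_new, important_prev, important_new, gamma=0.7):
    """Average one skill unit from the previous consolidated model and the newly fine-tuned model."""
    if important_prev and not important_new:
        return theta_prev                                   # preserve knowledge specific to historical tasks
    if important_new and not important_prev:
        return theta_new                                    # adopt knowledge specific to the new task
    return gamma * theta_prev + (1 - gamma) * theta_new     # shared (or unimportant) unit: weighted average
```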
|
| 252 |
+
|
| 253 |
+
# 4.3 Visualization of Skill Units
|
| 254 |
+
|
| 255 |
+
We visualized the distribution of importance scores for the skill units across tasks and models, as shown in Figure 5 (a minimal plotting sketch follows the list below), leading to several critical insights:
|
| 256 |
+
|
| 257 |
+
- There is a noticeable variation in the importance of skill units for the same task, with important skill units in LLaMA-7B making up about $20\%$ of all trainable LoRA parameters.
|
| 258 |
+
- The distribution of important skill units is task-dependent, indicating that there are both task-shared and task-specific parameters, which supports the premise behind TaSL.
|
| 259 |
+
- Lower layers, nearer to the input, are more crucial for the DST task compared to upper layers.
|
| 260 |
+
- Within each layer, the importance of the attention layer, especially the V (value) and O (output) matrices, consistently exceeds that of the Q (query), K (key) matrices, and the MLP layer.
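A minimal plotting sketch for this kind of visualization (assuming a precomputed task-by-unit score matrix; the file name and array shape are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

scores = np.load("importance_scores.npy")            # hypothetical file: shape (num_tasks, num_skill_units)
norm = scores / scores.max(axis=1, keepdims=True)    # normalize per task so tasks are comparable

plt.imshow(norm, aspect="auto", cmap="viridis")
plt.xlabel("skill unit index")
plt.ylabel("task")
plt.colorbar(label="normalized importance")
plt.title("Importance scores of skill units across tasks")
plt.show()
```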
|
| 261 |
+
|
| 262 |
+

|
| 263 |
+
|
| 264 |
+

|
| 265 |
+
Figure 5: Visualization of importance scores for skill units across different backbone models and tasks.
|
| 266 |
+
|
| 267 |
+

|
| 268 |
+
|
| 269 |
+

|
| 270 |
+
|
| 271 |
+
|
| 272 |
+
|
| 273 |
+
<table><tr><td>Seq. length</td><td>Method</td><td>Task1</td><td>Task2</td><td>Task3</td></tr><tr><td rowspan="4">2 (LLaMA-7B)</td><td>Upper bound</td><td>92.3</td><td>86.1</td><td>-</td></tr><tr><td>Fine-tuning</td><td>73.1</td><td>86.1</td><td>-</td></tr><tr><td>Coarse-grained</td><td>82.3</td><td>71.3</td><td>-</td></tr><tr><td>TaSL (ours)</td><td>86.9</td><td>83.3</td><td>-</td></tr><tr><td rowspan="4">3 (T5-small)</td><td>Upper bound</td><td>89.2</td><td>81.5</td><td>64.4</td></tr><tr><td>Fine-tuning</td><td>53.1</td><td>62.0</td><td>64.4</td></tr><tr><td>Coarse-grained</td><td>58.5</td><td>64.6</td><td>43.7</td></tr><tr><td>TaSL (ours)</td><td>80.0</td><td>74.1</td><td>63.8</td></tr></table>
|
| 274 |
+
|
| 275 |
+
Table 4: Analysis of knowledge balancing across old and new tasks. All results are reported in JGA (%).
|
| 276 |
+
|
| 277 |
+
# 4.4 Improved Balance in Knowledge Transfer
|
| 278 |
+
|
| 279 |
+
This section evaluates the effectiveness of our fine-grained model averaging method in achieving the optimal balance between preserving previous knowledge and excelling at new tasks in CL, comparing it to coarse-grained approaches.
|
| 280 |
+
|
| 281 |
+
Table 4 shows that for a sequence of two tasks, vanilla fine-tuning on LLaMA-7B results in a sharp decline on the historical Task 1 (from $92.3\%$ to $73.1\%$), indicating severe forgetting. Coarse-grained averaging mitigates this to an extent, recovering Task 1 to $82.3\%$, but at the cost of new-task performance, which drops to $71.3\%$. Our method lessens forgetting more effectively (improving Task 1 to $86.9\%$) while also maintaining better performance on Task 2, with less than a $3\%$ reduction.
|
| 282 |
+
|
| 283 |
+
As tasks increase to three, our method more effectively compensates for losses on new tasks with gains on historical tasks. Vanilla fine-tuning on T5-small results in a combined $55.6\%$ drop in Tasks 1 and 2, while our approach only shows a $16.6\%$ decrease and the loss on task 3 is less than $1\%$ due to TaSL's effective KT ability.
|
| 284 |
+
|
| 285 |
+
# 5 Conclusion
|
| 286 |
+
|
| 287 |
+
In this paper, we introduce a novel TaSL method to enhance Continual DST performance by facilitating effective knowledge transfer across tasks. Our approach leverages an innovative importance-aware skill localization technique and a skill consolidation strategy to differentiate between domain-specific and domain-shared parameters, mitigating forgetting. Comprehensive experiments showcase our method's exceptional ability to balance preserving past knowledge and excelling in new tasks.
|
| 288 |
+
|
| 289 |
+
# Limitations
|
| 290 |
+
|
| 291 |
+
TaSL excels at precisely distinguishing between task-specific and shared parameters through importance-aware skill localization. However, the current importance scoring criterion relies on first-order gradients and may lack precision; Hessian-based (second-order) information often captures importance more faithfully, but computing second-order gradients incurs significant computational cost. Therefore, future improvements should focus on developing more accurate and efficient skill localization methods.
|
| 292 |
+
|
| 293 |
+
In addition, in skill consolidation, the challenge lies in better integrating model parameters. Under our fine-grained model averaging strategy (Eq. (8)), selecting different weighted combinations could impact the overall performance. Although we investigated the model's sensitivity to various hyperparameter settings (Appendix D), with results showing stable and consistently strong performance across different combinations, there is still potential for further improvement. For instance, developing adaptive methods to select optimal weights or devising more efficient model averaging strategies could further enhance model performance.
|
| 294 |
+
|
| 295 |
+
# Acknowledgments
|
| 296 |
+
|
| 297 |
+
We thank the anonymous reviewers for their valuable feedback. This research was partially supported by the grant of HK ITF ITS/359/21FP.
|
| 298 |
+
|
| 299 |
+
# References
|
| 300 |
+
|
| 301 |
+
Hyundong Cho, Andrea Madotto, Zhaojiang Lin, Khyathi Raghavi Chandu, Satwik Kottur, Jing Xu, Jonathan May, and Chinnadhurai Sankar. 2023. Continual dialogue state tracking via example-guided question answering. arXiv preprint arXiv:2305.13721.
|
| 302 |
+
|
| 303 |
+
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
|
| 304 |
+
Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, and Stéphane Lathuilière. 2023. Weighted ensemble models are strong continual learners. arXiv e-prints, pages arXiv-2312.
|
| 305 |
+
Yujie Feng, Bo Liu, Xiaoyu Dong, Zexin Lu, Li-Ming Zhan, Xiao-Ming Wu, and Albert YS Lam. 2024. Continual dialogue state tracking via reason-of-select distillation. In *Findings of the Association for Computational Linguistics: ACL* 2024.
|
| 306 |
+
Yujie Feng, Zexin Lu, Bo Liu, Liming Zhan, and Xiao-Ming Wu. 2023a. Towards llm-driven dialogue state tracking. arXiv preprint arXiv:2310.14970.
|
| 307 |
+
Yujie Feng, Jiangtao Wang, Yasha Wang, and Xu Chu. 2022. Spatial-attention and demographic-augmented generative adversarial imputation network for population health data reconstruction. IEEE Transactions on Big Data.
|
| 308 |
+
Yujie Feng, Jiangtao Wang, Yasha Wang, and Xu Chu. 2023b. Towards sustainable compressive population health: a gan-based year-by-year imputation method. ACM Transactions on Computing for Healthcare, 4(1):1-18.
|
| 309 |
+
Yujie Feng, Jiangtao Wang, Yasha Wang, and Sumi Helal. 2021. Completing missing prevalence rates for multiple chronic diseases by jointly leveraging both intra-and inter-disease population health data correlations. In Proceedings of the Web Conference 2021, pages 183-193.
|
| 310 |
+
Binzong Geng, Fajie Yuan, Qiancheng Xu, Ying Shen, Ruifeng Xu, and Min Yang. 2021. Continual learning for task-oriented dialogue system with iterative network pruning, expanding and masking. arXiv preprint arXiv:2107.08173.
|
| 311 |
+
Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. 2019. Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 831-839.
|
| 312 |
+
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR.
|
| 313 |
+
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
|
| 314 |
+
|
| 315 |
+
Zixuan Ke and Bing Liu. 2022. Continual learning of natural language processing tasks: A survey. arXiv preprint arXiv:2211.12701.
|
| 316 |
+
Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, and Lei Shu. 2021. Achieving forgetting prevention and knowledge transfer in continual learning. Advances in Neural Information Processing Systems, 34:22443-22456.
|
| 317 |
+
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526.
|
| 318 |
+
Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, and Bing Liu. 2023. Parameter-Level Soft-Masking for Continual Learning. In Proc. of ICML.
|
| 319 |
+
Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947.
|
| 320 |
+
Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, and Weizhu Chen. 2021. Super tickets in pre-trained language models: From model compression to improving generalization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6524-6538.
|
| 321 |
+
Bo Liu, Liming Zhan, Zexin Lu, Yujie Feng, Lei Xue, and Xiao-Ming Wu. 2023. How good are large language models at out-of-distribution detection? arXiv preprint arXiv:2308.10261.
|
| 322 |
+
Qingbin Liu, Pengfei Cao, Cao Liu, Jiansong Chen, Xunliang Cai, Fan Yang, Shizhu He, Kang Liu, and Jun Zhao. 2021. Domain-lifelong learning for dialogue state tracking via knowledge preservation networks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2301-2311.
|
| 323 |
+
Zexin Lu, Keyang Ding, Yuji Zhang, Jing Li, Baolin Peng, and Lemao Liu. 2021a. Engage the public: Poll question generation for social media posts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 29-40.
|
| 324 |
+
Zexin Lu, Jing Li, Yingyi Zhang, and Haisong Zhang. 2021b. Getting your conversation on track: Estimation of residual life for conversations. In 2021 IEEE Spoken Language Technology Workshop (SLT), pages 1036-1043. IEEE.
|
| 325 |
+
|
| 326 |
+
Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, and Zhiguang Wang. 2020. Continual learning in task-oriented dialogue systems. arXiv preprint arXiv:2012.15504.
|
| 327 |
+
Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pages 109-165. Elsevier.
|
| 328 |
+
Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? Advances in neural information processing systems, 32.
|
| 329 |
+
Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, and Erik Cambria. 2023. Recent advances in deep learning based dialogue systems: A systematic survey. Artificial intelligence review, 56(4):3055-3155.
|
| 330 |
+
Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, and Sanjeev Arora. 2023. Task-specific skill localization in fine-tuned language models. arXiv preprint arXiv:2302.06600.
|
| 331 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551.
|
| 332 |
+
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 8689-8696.
|
| 333 |
+
Victor Sanh, Thomas Wolf, and Alexander M Rush. 2020. Movement pruning: adaptive sparsity by fine-tuning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pages 20378-20389.
|
| 334 |
+
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818-2826.
|
| 335 |
+
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
|
| 336 |
+
Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808-819.
|
| 337 |
+
|
| 338 |
+
Yongxin Xu, Xu Chu, Kai Yang, Zhiyuan Wang, Peinie Zou, Hongxin Ding, Junfeng Zhao, Yasha Wang, and Bing Xie. 2023a. Seqcare: Sequential training with external medical knowledge graph for diagnosis prediction in healthcare data. In Proceedings of the ACM Web Conference 2023, pages 2819-2830.
|
| 339 |
+
Yongxin Xu, Kai Yang, Chaohe Zhang, Peinie Zou, Zhiyuan Wang, Hongxin Ding, Junfeng Zhao, Yasha Wang, and Bing Xie. 2023b. Vecocare: visit sequences-clinical notes joint learning for diagnosis prediction in healthcare data. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, pages 4921-4929.
|
| 340 |
+
Kai Yang, Yongxin Xu, Peinie Zou, Hongxin Ding, Junfeng Zhao, Yasha Wang, and Bing Xie. 2023. Kerprint: local-global knowledge graph enhanced diagnosis prediction for retrospective and prospective interpretations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5357-5365.
|
| 341 |
+
Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2022a. Adaptive budget allocation for parameter-efficient fine-tuning. In The Eleventh International Conference on Learning Representations.
|
| 342 |
+
Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2022b. Platon: Pruning large transformer models with upper confidence bound of weight importance. In International Conference on Machine Learning, pages 26809-26823. PMLR.
|
| 343 |
+
Qi Zhu, Bing Li, Fei Mi, Xiaoyan Zhu, and Minlie Huang. 2022. Continual prompt tuning for dialog state tracking. arXiv preprint arXiv:2203.06654.
|
| 344 |
+
|
| 345 |
+
# A Dataset Statistics
|
| 346 |
+
|
| 347 |
+
Here, we offer a detailed description of the dataset used in Continual DST. Table 5 displays the number of slots for each of the 15 services used in our experiments and the count of samples in the training, validation, and test sets. Table 6 illustrates the training sequence for these 15 tasks in the context of continual learning.
|
| 348 |
+
|
| 349 |
+
# B Implementation Details
|
| 350 |
+
|
| 351 |
+
Definition of Skill Units In our Importance-aware Skill Localization technique, to circumvent the high computational resource demands of parameter-level localization, we introduced a novel group-wise metric, redefining the skill unit $u$ as the basic element for computing importance scores. However, the division of skill units varies across backbone models due to differences in parameter counts and architectural designs (e.g., decoder-only and encoder-decoder architectures). The specific distinctions are as follows:
|
| 354 |
+
|
| 355 |
+
- Encoder-decoder architecture backbones. For these backbones (Feng et al., 2021), such as T5-small, T5-base, and T5-large, we perform full-parameter fine-tuning during training. For organizational simplicity, we divide the models based on the different functionalities within the transformer blocks, as depicted in Table 7. For instance, T5-small comprises 6 transformer blocks in both the encoder and the decoder, yielding a total of 131 skill units; T5-base yields 257 skill units and T5-large 558.
|
| 356 |
+
- Decoder-only architecture backbone. The LLaMA-7B model we utilize falls into this category (Feng et al., 2023b). Leveraging Parameter-Efficient Fine-Tuning (PEFT) to expedite training, we treat the LoRA matrices A and B as individual basic skill units and add LoRA adapters to the attention layers of LLaMA. As detailed in Table 8, each layer comprises 8 distinct skill units; given that LLaMA-7B consists of 32 layers, it is thereby segmented into 256 skill units in total (a counting sketch follows this list).
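As a rough sanity check of the grouping for encoder-decoder backbones, one can simply treat each named parameter tensor as a candidate skill unit (an approximation of the enumeration in Table 7):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
skill_units = [name for name, p in model.named_parameters() if p.requires_grad]
print(len(skill_units))   # roughly 131 tensors for T5-small, in line with the 131 skill units noted above
print(skill_units[:4])    # e.g. the shared embedding and the first block's attention matrices
```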
|
| 357 |
+
|
| 358 |
+
Model training details For different backbones, we utilized the following hyperparameters:
|
| 359 |
+
|
| 360 |
+
- T5-small (60M), T5-base (220M), and FLAN-T5-large (780M): training was conducted with a learning rate of 3e-4, a batch size of 8, a maximum input length of 512, a maximum target length of 128, and 5 epochs.
|
| 361 |
+
- LLaMA (7B): utilizing LoRA for efficiency, with a learning rate of 3e-4, a batch size of 128, a cutoff length of 512, and 5 epochs. LoRA settings were $r = 8$, alpha = 16, dropout = 0.05, targeting the modules [q_proj, k_proj, v_proj, o_proj] (see the configuration sketch after this list). For testing, settings included temperature = 0.02, top_p = 0, top_k = 1, num_beams = 1, max_new_tokens = 128.
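One way to reproduce the LoRA setup above with the Hugging Face peft library (an assumed toolchain; the model identifier is only for illustration):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # the LoRA A/B matrices on the attention layers are the trainable skill units
```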
|
| 362 |
+
|
| 363 |
+
Experiments are carried out on two NVIDIA A100 GPUs with 80GB memory. Results are averaged across five different task orders, and the standard error is included in the tables and plots provided (Feng et al., 2022).
|
| 364 |
+
|
| 365 |
+
# C Additional Results
|
| 366 |
+
|
| 367 |
+
To further validate TaSL's effectiveness in more complex continual learning scenarios, we have conducted additional experiments to verify its performance on transitioning between different datasets,
|
| 368 |
+
|
| 369 |
+
<table><tr><td>Task ID</td><td>Service</td><td># Slots</td><td colspan="3"># Dialogs</td><td colspan="3"># Samples</td><td colspan="2">Avg. tokens</td></tr><tr><td></td><td></td><td></td><td>Train</td><td>Dev</td><td>Test</td><td>Train</td><td>Dev</td><td>Test</td><td>Context</td><td>Query</td></tr><tr><td>30</td><td>services_4</td><td>5</td><td>86</td><td>13</td><td>25</td><td>680</td><td>97</td><td>208</td><td>154</td><td>49</td></tr><tr><td>31</td><td>flights_1</td><td>10</td><td>560</td><td>80</td><td>160</td><td>4680</td><td>667</td><td>1379</td><td>168</td><td>10</td></tr><tr><td>32</td><td>services_3</td><td>5</td><td>131</td><td>19</td><td>38</td><td>959</td><td>143</td><td>290</td><td>143</td><td>54</td></tr><tr><td>33</td><td>flights_3</td><td>8</td><td>65</td><td>10</td><td>19</td><td>420</td><td>75</td><td>116</td><td>133</td><td>79</td></tr><tr><td>34</td><td>trains_1</td><td>7</td><td>58</td><td>9</td><td>17</td><td>415</td><td>67</td><td>117</td><td>131</td><td>76</td></tr><tr><td>35</td><td>homes_2</td><td>8</td><td>62</td><td>9</td><td>18</td><td>424</td><td>56</td><td>139</td><td>140</td><td>89</td></tr><tr><td>36</td><td>rentalcars_2</td><td>6</td><td>77</td><td>11</td><td>23</td><td>631</td><td>91</td><td>185</td><td>157</td><td>61</td></tr><tr><td>37</td><td>restaurants_1</td><td>9</td><td>256</td><td>37</td><td>74</td><td>2098</td><td>297</td><td>581</td><td>153</td><td>10</td></tr><tr><td>38</td><td>music_1</td><td>6</td><td>68</td><td>10</td><td>20</td><td>468</td><td>73</td><td>142</td><td>118</td><td>61</td></tr><tr><td>39</td><td>hotels_4</td><td>7</td><td>80</td><td>12</td><td>23</td><td>559</td><td>99</td><td>141</td><td>134</td><td>72</td></tr><tr><td>40</td><td>media_2</td><td>5</td><td>32</td><td>4</td><td>10</td><td>215</td><td>29</td><td>71</td><td>112</td><td>59</td></tr><tr><td>41</td><td>hotels_3</td><td>6</td><td>90</td><td>13</td><td>26</td><td>737</td><td>100</td><td>193</td><td>157</td><td>64</td></tr><tr><td>42</td><td>rentalcars_3</td><td>7</td><td>44</td><td>7</td><td>13</td><td>332</td><td>55</td><td>99</td><td>148</td><td>72</td></tr><tr><td>43</td><td>hotels_1</td><td>7</td><td>99</td><td>14</td><td>29</td><td>868</td><td>105</td><td>250</td><td>161</td><td>71</td></tr><tr><td>44</td><td>homes_1</td><td>7</td><td>244</td><td>35</td><td>70</td><td>1829</td><td>282</td><td>540</td><td>159</td><td>81</td></tr></table>
|
| 370 |
+
|
| 371 |
+
Table 5: Statistics of the 15 services we used in experiments.
|
| 372 |
+
|
| 373 |
+
<table><tr><td>Task order</td><td colspan="15">Tasks' IDs in order</td></tr><tr><td>Order1</td><td>30</td><td>31</td><td>32</td><td>33</td><td>34</td><td>35</td><td>36</td><td>37</td><td>38</td><td>39</td><td>40</td><td>41</td><td>42</td><td>43</td><td>44</td></tr><tr><td>Order2</td><td>39</td><td>33</td><td>36</td><td>42</td><td>40</td><td>37</td><td>38</td><td>34</td><td>32</td><td>35</td><td>41</td><td>31</td><td>30</td><td>44</td><td>43</td></tr><tr><td>Order3</td><td>30</td><td>41</td><td>38</td><td>31</td><td>43</td><td>39</td><td>40</td><td>33</td><td>34</td><td>44</td><td>37</td><td>36</td><td>32</td><td>35</td><td>42</td></tr><tr><td>Order4</td><td>43</td><td>40</td><td>44</td><td>38</td><td>30</td><td>37</td><td>31</td><td>39</td><td>32</td><td>35</td><td>41</td><td>34</td><td>33</td><td>36</td><td>42</td></tr><tr><td>Order5</td><td>30</td><td>33</td><td>44</td><td>31</td><td>38</td><td>32</td><td>42</td><td>40</td><td>37</td><td>43</td><td>36</td><td>39</td><td>41</td><td>35</td><td>34</td></tr></table>
|
| 374 |
+
|
| 375 |
+
Table 6: Five task orders of all our 15 tasks experiments.
|
| 376 |
+
|
| 377 |
+
specifically from SGD to MultiWoz. This involved including another 5 distinct domains from the MultiWoz 2.1 dataset, in addition to the 15 domains in SGD, resulting in a total of 20 domains (i.e., tasks). Table 9 presents the performance of various methods, utilizing T5-small as the backbone.
|
| 378 |
+
|
| 379 |
+
The findings align with the observations from Table 1, although there is a noticeable decrease in the efficacy of all evaluated methods, due to the significant discrepancies in data distribution across the datasets examined. As a memory-free method, our TaSL still significantly outperforms the strongest baseline (i.e., the memory-free version of DST-EGQA) and even surpasses some memory-based methods like "Replay". These findings demonstrate TaSL's robustness and effectiveness, showcasing its capability to handle complex continual learning scenarios.
|
| 380 |
+
|
| 381 |
+
# D Sensitivity Analysis for Hyperparameters
|
| 382 |
+
|
| 383 |
+
The proposed framework incorporates three key hyperparameters, including the $\alpha$ for computing importance scores in Equations (3) and (4), the $\beta$ for calculating cumulative importance scores in Equation (6), and the $\gamma$ for performing weighted averaging within skill units as outlined in Equation (8). Our analysis aims to assess the impact of varying these hyperparameters on our method's performance, testing on the T5-small backbone model.
|
| 384 |
+
|
| 385 |
+
As evidenced in Table 10, we determine that the optimal setting for $\alpha$ is 0.55. An $\alpha$ value too low results in a performance decline, indicating that the calculated importance scores are not sufficiently accurate. Furthermore, as depicted in the results of Tables 11 and 12, we also find that $\beta$ and $\gamma$ values within a normal range do not significantly affect performance. However, excessively high or low values for $\beta$ and $\gamma$ may skew the model towards favoring either past or current task knowledge, thereby disrupting the desired balance.
|
| 386 |
+
|
| 387 |
+
<table><tr><td>Block Type</td><td>Skill Unit Name</td></tr><tr><td rowspan="8">Encoder</td><td>SelfAttention.q.weight</td></tr><tr><td>SelfAttention.k.weight</td></tr><tr><td>SelfAttention.v.weight</td></tr><tr><td>SelfAttention.o.weight</td></tr><tr><td>layer.0.layer_norm.weight</td></tr><tr><td>DenseReluDense.wi.weight</td></tr><tr><td>DenseReluDense.wo.weight</td></tr><tr><td>layer.1.layer_norm.weight</td></tr><tr><td rowspan="11">Decoder</td><td>SelfAttention.q.weight</td></tr><tr><td>SelfAttention.k.weight</td></tr><tr><td>SelfAttention.v.weight</td></tr><tr><td>SelfAttention.o.weight</td></tr><tr><td>SelfAttention.relative_attention_bias.weight</td></tr><tr><td>layer.0.layer_norm.weight</td></tr><tr><td>layer.1.EncDecAttention.q.weight</td></tr><tr><td>layer.1.EncDecAttention.k.weight</td></tr><tr><td>layer.1.EncDecAttention.v.weight</td></tr><tr><td>layer.1.EncDecAttention.o.weight</td></tr><tr><td>layer.1.layer_norm.weight</td></tr></table>
|
| 388 |
+
|
| 389 |
+
Table 7: Definition of skill units for encoder-decoder architecture backbones at each transformer block.
|
| 390 |
+
|
| 391 |
+
<table><tr><td>Block Type</td><td>Skill Unit Name</td></tr><tr><td rowspan="8">Decoder</td><td>self_attn.q_proj.lora_A.default.weight</td></tr><tr><td>self_attn.q_proj.lora_B.default.weight</td></tr><tr><td>self_attn.k_proj.lora_A.default.weight</td></tr><tr><td>self_attn.k_proj.lora_B.default.weight</td></tr><tr><td>self_attn.v_proj.lora_A.default.weight</td></tr><tr><td>self_attn.v_proj.lora_B.default.weight</td></tr><tr><td>self_attn.o_proj.lora_A.default.weight</td></tr><tr><td>self_attn.o_proj.lora_B.default.weight</td></tr></table>

Table 8: Definition of skill units for decoder-only architecture backbones at each transformer block.
|
| 392 |
+
|
| 393 |
+
Nonetheless, the model's performance remains relatively stable across most conditions, indicating a low sensitivity to hyperparameter variations.
|
| 394 |
+
|
| 395 |
+
Regarding the selection of the threshold for important skill units, Table 13 shows the model's performance with varying thresholds $\delta$ on T5-small.
|
| 396 |
+
|
| 397 |
+
It can be seen that setting a high threshold $(50\%)$ reduces model effectiveness by categorizing less significant skill units as important, which can contaminate historical knowledge and lead to forgetting. Conversely, a $1\%$ threshold still maintains strong performance owing to our effective skill consolidation approach, which effectively preserves task-specific knowledge and prevents forgetting. Considering that the heatmap in Figure 5 displays approximately $20\%$ of the area in darker shades, signifying greater importance, we opted for a $20\%$ threshold to differentiate between important and unimportant skill units.
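Concretely, marking the top-$\delta$ fraction of skill units as important can be done as follows (a sketch; ties are resolved permissively):

```python
import numpy as np

def important_unit_mask(unit_scores, delta=0.20):
    """Return a boolean mask selecting the top-delta fraction of skill units by importance score."""
    scores = np.asarray(unit_scores, dtype=float)
    k = max(1, int(round(delta * len(scores))))
    threshold = np.sort(scores)[-k]
    return scores >= threshold
```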
|
| 398 |
+
|
| 399 |
+
|
| 400 |
+
|
| 401 |
+
<table><tr><td>Method</td><td>Memory-Free</td><td>Avg. JGA</td><td>FWT</td><td>BWT</td></tr><tr><td>Fine-tuning</td><td></td><td>20.1</td><td>6.6</td><td>-53.1</td></tr><tr><td>DST-EGQA</td><td>✓</td><td>40.5</td><td>18.4</td><td>-37.1</td></tr><tr><td>TaSL (ours)</td><td></td><td>49.9</td><td>22.0</td><td>-23.8</td></tr><tr><td>Replay</td><td></td><td>47.2</td><td>7.3</td><td>-16.0</td></tr><tr><td>DST-EGQA</td><td>✗</td><td>51.2</td><td>18.5</td><td>-21.9</td></tr></table>
|
| 402 |
+
|
| 403 |
+
Table 9: Cross-dataset performance of TaSL.
|
| 404 |
+
|
| 405 |
+
<table><tr><td>α1,α2</td><td>avg. JGA</td><td>FWT</td><td>BWT</td></tr><tr><td>fine-tuning</td><td>41.6</td><td>9.6</td><td>-36.7</td></tr><tr><td>0.15</td><td>61.8</td><td>29.7</td><td>-10.7</td></tr><tr><td>0.35</td><td>61.2</td><td>30.1</td><td>-12.3</td></tr><tr><td>0.55</td><td>62.8</td><td>28.6</td><td>-9.5</td></tr><tr><td>0.85</td><td>60.7</td><td>28.9</td><td>-10.3</td></tr><tr><td>0.95</td><td>61.7</td><td>30.0</td><td>-10.6</td></tr></table>
|
| 406 |
+
|
| 407 |
+
Table 10: Performance comparisons of TaSL (using T5-small as the backbone) equipped with different $\alpha$ at task order 1.
|
| 408 |
+
|
| 409 |
+
# E Detailed Algorithm
|
| 410 |
+
|
| 411 |
+
In this section, we provide the detailed implementation of TaSL algorithm (see Algorithm 2).
|
| 412 |
+
|
| 413 |
+
<table><tr><td>β</td><td>avg. JGA</td><td>FWT</td><td>BWT</td></tr><tr><td>fine-tuning</td><td>41.6</td><td>9.6</td><td>-36.7</td></tr><tr><td>0.1</td><td>61.8</td><td>29.6</td><td>-11.7</td></tr><tr><td>0.3</td><td>61.5</td><td>28.4</td><td>-11.4</td></tr><tr><td>0.5</td><td>62.3</td><td>29.5</td><td>-10.2</td></tr><tr><td>0.7</td><td>60.7</td><td>28.9</td><td>-10.3</td></tr><tr><td>0.9</td><td>58.2</td><td>30.2</td><td>-13.0</td></tr></table>
|
| 414 |
+
|
| 415 |
+
Table 11: Performance comparisons of TaSL (using T5-small as the backbone) equipped with different $\beta$ at task order 1.
|
| 416 |
+
|
| 417 |
+
<table><tr><td>γ</td><td>avg. JGA</td><td>FWT</td><td>BWT</td></tr><tr><td>fine-tuning</td><td>41.6</td><td>9.6</td><td>-36.7</td></tr><tr><td>0.1</td><td>60.1</td><td>28.2</td><td>-12.1</td></tr><tr><td>0.3</td><td>61.7</td><td>28.8</td><td>-11.2</td></tr><tr><td>0.5</td><td>63.0</td><td>28.4</td><td>-10.5</td></tr><tr><td>0.7</td><td>60.7</td><td>28.9</td><td>-10.3</td></tr><tr><td>0.9</td><td>61.7</td><td>27.4</td><td>-11.5</td></tr></table>
|
| 418 |
+
|
| 419 |
+
Table 12: Performance comparisons of TaSL (using T5-small as the backbone) equipped with different $\gamma$ at task order 1.
|
| 420 |
+
|
| 421 |
+
<table><tr><td>Importance Thresholds δ</td><td>Avg. JGA</td><td>FWT</td><td>BWT</td></tr><tr><td>1%</td><td>62.0</td><td>26.3</td><td>-9.4</td></tr><tr><td>5%</td><td>63.4</td><td>25.8</td><td>-10.1</td></tr><tr><td>10%</td><td>62.2</td><td>26.2</td><td>-9.5</td></tr><tr><td>20%</td><td>62.1</td><td>26.6</td><td>-9.1</td></tr><tr><td>30%</td><td>62.7</td><td>26.5</td><td>-10.0</td></tr><tr><td>40%</td><td>60.9</td><td>24.6</td><td>-10.2</td></tr><tr><td>50%</td><td>54.8</td><td>23.4</td><td>-10.3</td></tr></table>
|
| 422 |
+
|
| 423 |
+
Table 13: Performance comparisons of TaSL (using T5-small as the backbone) equipped with different importance thresholds $\delta$.
|
| 424 |
+
|
| 425 |
+
# Algorithm 2 TaSL
|
| 426 |
+
|
| 427 |
+
Input: Dataset $\mathcal{D}_k$ for task $k = 1,\dots ,K$ ; initial pre-trained model $f_{0}$ ; hyperparameters $\beta, \gamma$ .
|
| 428 |
+
|
| 429 |
+
1: # sequential tasks.
2: for task $k = 1, \dots, K$ do
3:   Get $f_{k}$ and calculate $\mathcal{U}_{k}$ by Algorithm (1);
4:   Calculate $\delta_{k}$ based on $\mathcal{I}(\mathcal{U}_k)$;
5:   if $k = 1$ then
6:     # initialization at the first task.
7:     $\hat{f}_1 \gets f_1, \hat{\mathcal{U}}_1 \gets \mathcal{U}_1, \hat{\delta}_1 \gets \delta_1$;
8:   else
9:     # fine-grained model averaging.
10:     for skill unit $i = 1, \dots, n$ do
11:       Calculate $\hat{u}_i^k$ by Eq. (8);
12:     end for
13:     Get the averaged model $\hat{f}_k$ based on $\mathcal{U}_k$;
14:     Calculate the accumulated importance score $\mathcal{I}(\hat{\mathcal{U}}_k)$ according to Eq. (6);
15:     Calculate $\hat{\delta}_k$ based on $\mathcal{I}(\hat{\mathcal{U}}_k)$.
16:   end if
17: end for
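A high-level Python rendering of Algorithm 2, with hypothetical helpers (`train_on_task`, `compute_unit_importance`, `consolidate`) standing in for Algorithm 1 and Eqs. (6) and (8); it mirrors the control flow above rather than the authors' implementation:

```python
def tasl(model, tasks, beta=0.7, gamma=0.7):
    cons_model, cons_importance = None, None
    for k, task in enumerate(tasks, start=1):
        model = train_on_task(model, task)                     # fine-tune on the k-th task's data
        importance = compute_unit_importance(model, task)      # importance-aware skill localization (Alg. 1)
        if k == 1:
            cons_model, cons_importance = model, importance    # initialization at the first task
        else:
            cons_model = consolidate(cons_model, model,
                                     cons_importance, importance, gamma)   # fine-grained averaging, Eq. (8)
            cons_importance = {u: beta * cons_importance[u] + (1 - beta) * importance[u]
                               for u in importance}            # accumulated importance, Eq. (6)
        model = cons_model                                     # presumably the next task starts from here
    return cons_model
```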
|
2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b0ec906d0ab1781e6fd31bbfdfe638eb19e6956ee9dce26232c5e37ff3a76c01
|
| 3 |
+
size 715139
|
2024/TaSL_ Continual Dialog State Tracking via Task Skill Localization and Consolidation/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_content_list.json
ADDED
|
@@ -0,0 +1,2363 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Talk With Human-like Agents: Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
115,
|
| 8 |
+
79,
|
| 9 |
+
878,
|
| 10 |
+
118
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Haoqiu Yan\\*1,3, Yongxin Zhu\\*2,3, Kai Zheng1,3,",
|
| 17 |
+
"bbox": [
|
| 18 |
+
292,
|
| 19 |
+
124,
|
| 20 |
+
702,
|
| 21 |
+
143
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Bing Liu $^{4}$ , Haoyu Cao $^{4}$ , Deqiang Jiang $^{4}$ , Linli Xu $^{\\dagger,3}$",
|
| 28 |
+
"bbox": [
|
| 29 |
+
270,
|
| 30 |
+
153,
|
| 31 |
+
724,
|
| 32 |
+
170
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "$^{1}$ School of Computer Science and Technology, University of Science and Technology of China",
|
| 39 |
+
"bbox": [
|
| 40 |
+
115,
|
| 41 |
+
171,
|
| 42 |
+
880,
|
| 43 |
+
187
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "$^{2}$ School of Data Science, University of Science and Technology of China",
|
| 50 |
+
"bbox": [
|
| 51 |
+
203,
|
| 52 |
+
187,
|
| 53 |
+
794,
|
| 54 |
+
204
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "$^{3}$ State Key Laboratory of Cognitive Intelligence, $^{4}$ Tencent Youtu Lab",
|
| 61 |
+
"bbox": [
|
| 62 |
+
216,
|
| 63 |
+
204,
|
| 64 |
+
778,
|
| 65 |
+
219
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "{yanhq,zyx2016,dthdzk} $@$ mail.ustc.edu.cn",
|
| 72 |
+
"bbox": [
|
| 73 |
+
300,
|
| 74 |
+
221,
|
| 75 |
+
695,
|
| 76 |
+
237
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "{billbliu,rechycao,dqiangjiang}@tencent.com linlixu@ustc.edu.cn",
|
| 83 |
+
"bbox": [
|
| 84 |
+
176,
|
| 85 |
+
237,
|
| 86 |
+
818,
|
| 87 |
+
253
|
| 88 |
+
],
|
| 89 |
+
"page_idx": 0
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"text": "Abstract",
|
| 94 |
+
"text_level": 1,
|
| 95 |
+
"bbox": [
|
| 96 |
+
260,
|
| 97 |
+
261,
|
| 98 |
+
339,
|
| 99 |
+
275
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "text",
|
| 105 |
+
"text": "Large Language Model (LLM)-enhanced agents become increasingly prevalent in Human-AI communication, offering vast potential from entertainment to professional domains. However, current multi-modal dialogue systems overlook the acoustic information present in speech, which is crucial for understanding human communication nuances. This oversight can lead to misinterpretations of speakers' intentions, resulting in inconsistent or even contradictory responses within dialogues. To bridge this gap, in this paper, we propose PerceptiveAgent, an empathetic multi-modal dialogue system designed to discern deeper or more subtle meanings beyond the literal interpretations of words through the integration of speech modality perception. Employing LLMs as a cognitive core, PerceptiveAgent perceives acoustic information from input speech and generates empathetic responses based on speaking styles described in natural language. Experimental results indicate that PerceptiveAgent excels in contextual understanding by accurately discerning the speakers' true intentions in scenarios where the linguistic meaning is either contrary to or inconsistent with the speaker's true feelings, producing more nuanced and expressive spoken dialogues. Code is publicly available at: https://github.com/Haoqiu-Yan/PerceptiveAgent.",
|
| 106 |
+
"bbox": [
|
| 107 |
+
141,
|
| 108 |
+
285,
|
| 109 |
+
460,
|
| 110 |
+
725
|
| 111 |
+
],
|
| 112 |
+
"page_idx": 0
|
| 113 |
+
},
|
| 114 |
+
{
|
| 115 |
+
"type": "text",
|
| 116 |
+
"text": "1 Introduction",
|
| 117 |
+
"text_level": 1,
|
| 118 |
+
"bbox": [
|
| 119 |
+
114,
|
| 120 |
+
734,
|
| 121 |
+
258,
|
| 122 |
+
750
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "Artificial Intelligence (AI) agents (Russell and Norvig, 2010; Negnevitsky, 2005) are entities designed to replicate human-like intelligence and functionalities, serving as the essential building blocks of AI systems. An ideal agent should be capable of perceiving its environment with sensors, making informed decisions, and then taking actions in response to users or scenarios. Recently,",
|
| 129 |
+
"bbox": [
|
| 130 |
+
112,
|
| 131 |
+
759,
|
| 132 |
+
489,
|
| 133 |
+
888
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "image",
|
| 139 |
+
"img_path": "images/499c8dd57bdbe75645110da087d80be5d27b5a8314f0dbe705521ae217a47fb9.jpg",
|
| 140 |
+
"image_caption": [
|
| 141 |
+
"Figure 1: Examples illustrating the definition of empathy within dialogues."
|
| 142 |
+
],
|
| 143 |
+
"image_footnote": [],
|
| 144 |
+
"bbox": [
|
| 145 |
+
522,
|
| 146 |
+
268,
|
| 147 |
+
915,
|
| 148 |
+
406
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "text",
|
| 154 |
+
"text": "Large Language Models (LLMs) (Wei et al., 2022; Shanahan, 2024; Taylor et al., 2022) have exhibited remarkable capabilities in diverse tasks, offering opportunities for building general AI agents that engage in human-like interactions, such as virtual assistants and intelligent robots. However, current text-only dialogue systems (Peng et al., 2023; Touvron et al., 2023) fall short in bridging the gap between experimental and realistic scenarios, where humans perceive and understand the world through diverse multi-modal information. Thus, the integration of acoustic information into dialogues has the potential to foster the development of more human-like agents, thereby enhancing the empathetic experience they offer.",
|
| 155 |
+
"bbox": [
|
| 156 |
+
507,
|
| 157 |
+
485,
|
| 158 |
+
884,
|
| 159 |
+
726
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 0
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "text",
|
| 165 |
+
"text": "Empathetic responses involve two essential aspects: cognitive and affective empathy (Cuff et al., 2016; Kim et al., 2021; Reis et al., 2011; Smith, 2006), which reflect an understanding of the human-talker's thoughts and feelings respectively. Specifically, cognitive empathy involves understanding the human-talker's thoughts, perspectives, and described events, enabling the agent to provide responses relevant to the dialogue topic (Sabour et al., 2022). Conversely, affective empathy entails responding based on observed emotional expressions in the dialogue history, contributing to the nat",
|
| 166 |
+
"bbox": [
|
| 167 |
+
507,
|
| 168 |
+
728,
|
| 169 |
+
885,
|
| 170 |
+
921
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 0
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "page_footnote",
|
| 176 |
+
"text": "*Equal contribution.",
|
| 177 |
+
"bbox": [
|
| 178 |
+
134,
|
| 179 |
+
894,
|
| 180 |
+
262,
|
| 181 |
+
906
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 0
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "page_footnote",
|
| 187 |
+
"text": "† Corresponding author.",
|
| 188 |
+
"bbox": [
|
| 189 |
+
136,
|
| 190 |
+
906,
|
| 191 |
+
280,
|
| 192 |
+
920
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 0
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "page_number",
|
| 198 |
+
"text": "15009",
|
| 199 |
+
"bbox": [
|
| 200 |
+
475,
|
| 201 |
+
927,
|
| 202 |
+
524,
|
| 203 |
+
940
|
| 204 |
+
],
|
| 205 |
+
"page_idx": 0
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"type": "footer",
|
| 209 |
+
"text": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15009-15022 August 11-16, 2024 ©2024 Association for Computational Linguistics",
|
| 210 |
+
"bbox": [
|
| 211 |
+
80,
|
| 212 |
+
945,
|
| 213 |
+
915,
|
| 214 |
+
973
|
| 215 |
+
],
|
| 216 |
+
"page_idx": 0
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"type": "text",
|
| 220 |
+
"text": "uralness of synthesized speech (Cong et al., 2021; Guo et al., 2021; Nishimura et al., 2022). While recent works (Saito et al., 2023; Nguyen et al., 2022; Mitsui et al., 2023) leverage LLM's strong capabilities of contextual understanding and content generation to synthesize empathetic speeches, there remains a discrepancy between cognitive and affective empathy. This arises because cognitive content is preassigned before affective speech is deduced from latent representations of multi-modal dialogue history.",
|
| 221 |
+
"bbox": [
|
| 222 |
+
112,
|
| 223 |
+
84,
|
| 224 |
+
489,
|
| 225 |
+
261
|
| 226 |
+
],
|
| 227 |
+
"page_idx": 1
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"type": "text",
|
| 231 |
+
"text": "Recently, advancements in multi-modal content perception and generation have been achieved by various methods (Zhang et al., 2023; Huang et al., 2024; Chen et al., 2023; Wu et al., 2023), where audio is represented as either recognized text with an automatic speech recognition model or discrete features with a speech encoder. However, while linguistic information in speech is predominantly captured by both discrete acoustic units and textual representations, acoustic features tend to be disregarded. This oversight can lead to misinterpretations of the speaker's intentions, resulting in discrepant or even contradictory responses within the dialogue history. As illustrated in Figure 1, the left scenario fails to consider the perspective of the listener while the right one barely understands or empathizes with the speaker's feelings.",
|
| 232 |
+
"bbox": [
|
| 233 |
+
115,
|
| 234 |
+
263,
|
| 235 |
+
489,
|
| 236 |
+
535
|
| 237 |
+
],
|
| 238 |
+
"page_idx": 1
|
| 239 |
+
},
|
| 240 |
+
{
|
| 241 |
+
"type": "text",
|
| 242 |
+
"text": "In this paper, we propose PerceptiveAgent, an empathetic multi-modal dialogue system that can discern deeper or more subtle meanings beyond the literal interpretations of words, based on speaking styles described in natural language. Specifically, PerceptiveAgent first comprehends the speaker's intentions accurately by a perceptive captioner model that captures acoustic features from each speech within dialogues. Subsequently, an LLM module acts as the cognitive core, producing the relevant response content with a caption describing how to articulate the response. A Multi-Speaker and Multi-Attribute Synthesizer (MSMA-Synthesizer) is then developed to synthesize nuanced and expressive speech.",
|
| 243 |
+
"bbox": [
|
| 244 |
+
112,
|
| 245 |
+
538,
|
| 246 |
+
489,
|
| 247 |
+
778
|
| 248 |
+
],
|
| 249 |
+
"page_idx": 1
|
| 250 |
+
},
|
| 251 |
+
{
|
| 252 |
+
"type": "text",
|
| 253 |
+
"text": "Our contributions include the following:",
|
| 254 |
+
"bbox": [
|
| 255 |
+
131,
|
| 256 |
+
778,
|
| 257 |
+
433,
|
| 258 |
+
796
|
| 259 |
+
],
|
| 260 |
+
"page_idx": 1
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"type": "list",
|
| 264 |
+
"sub_type": "text",
|
| 265 |
+
"list_items": [
|
| 266 |
+
"- We pioneer the construction of a speech captioner model to perceive and express acoustic information through natural language.",
|
| 267 |
+
"- We develop an empathetic multi-modal dialogue system capable of identifying the speaker's true intentions through audio modal"
|
| 268 |
+
],
|
| 269 |
+
"bbox": [
|
| 270 |
+
132,
|
| 271 |
+
810,
|
| 272 |
+
489,
|
| 273 |
+
921
|
| 274 |
+
],
|
| 275 |
+
"page_idx": 1
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"type": "text",
|
| 279 |
+
"text": "ity perception and generating empathetic speech.",
|
| 280 |
+
"bbox": [
|
| 281 |
+
544,
|
| 282 |
+
84,
|
| 283 |
+
880,
|
| 284 |
+
116
|
| 285 |
+
],
|
| 286 |
+
"page_idx": 1
|
| 287 |
+
},
|
| 288 |
+
{
|
| 289 |
+
"type": "text",
|
| 290 |
+
"text": "- Experiments demonstrate that PerceptiveAgent can accurately discern the true intentions in scenarios where the literal interpretations of words are either contrary to or inconsistent with the speaker's true feelings.",
|
| 291 |
+
"bbox": [
|
| 292 |
+
529,
|
| 293 |
+
130,
|
| 294 |
+
884,
|
| 295 |
+
227
|
| 296 |
+
],
|
| 297 |
+
"page_idx": 1
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"type": "text",
|
| 301 |
+
"text": "2 Related Work",
|
| 302 |
+
"text_level": 1,
|
| 303 |
+
"bbox": [
|
| 304 |
+
509,
|
| 305 |
+
241,
|
| 306 |
+
665,
|
| 307 |
+
256
|
| 308 |
+
],
|
| 309 |
+
"page_idx": 1
|
| 310 |
+
},
|
| 311 |
+
{
|
| 312 |
+
"type": "text",
|
| 313 |
+
"text": "2.1 Multi-modal Dialogue Systems",
|
| 314 |
+
"text_level": 1,
|
| 315 |
+
"bbox": [
|
| 316 |
+
509,
|
| 317 |
+
268,
|
| 318 |
+
796,
|
| 319 |
+
284
|
| 320 |
+
],
|
| 321 |
+
"page_idx": 1
|
| 322 |
+
},
|
| 323 |
+
{
|
| 324 |
+
"type": "text",
|
| 325 |
+
"text": "Recent advances in multi-modal dialogue systems have primarily focused on transforming speech into discrete latent representation. For instance, Zhang et al. (2023); Chen et al. (2023); Wu et al. (2023) utilize speech encoders to perceive speech and then synthesize responses according to discrete acoustic units derived from LLMs, showing intrinsic cross-modal conversational abilities. Besides, works including (Nguyen et al., 2022; Mitsui et al., 2023) autonomously generate two-channel spoken dialogues, simulating realistic interactions between agents, including vocal interactions, laughter, and turn-taking. However, while discrete acoustic units capture linguistic information effectively, prosodic features are mostly ignored. To address this limitation and preserve prosodic information as much as possible, we develop a multi-modal dialog system that perceives prosody through speech captioning and responds empathetically using an LLM and a speech synthesizer.",
|
| 326 |
+
"bbox": [
|
| 327 |
+
507,
|
| 328 |
+
290,
|
| 329 |
+
884,
|
| 330 |
+
612
|
| 331 |
+
],
|
| 332 |
+
"page_idx": 1
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"type": "text",
|
| 336 |
+
"text": "2.2 Cross-Modal Text Generation",
|
| 337 |
+
"text_level": 1,
|
| 338 |
+
"bbox": [
|
| 339 |
+
507,
|
| 340 |
+
626,
|
| 341 |
+
791,
|
| 342 |
+
640
|
| 343 |
+
],
|
| 344 |
+
"page_idx": 1
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"type": "text",
|
| 348 |
+
"text": "Cross-modal text generation involves generating text conditioned on other modalities such as audio and vision (Li et al., 2022; Liu et al., 2024; Zhang et al., 2024), where the key challenge is to align multi-modal features with the text latent space. Recent approaches (Zhu et al., 2023; Chen et al., 2023) address this challenge by aligning off-the-shelf pre-trained LLMs with learnable visual encoders (Li et al., 2023; Zhao et al., 2023), transforming multi-modal representations as learnable query embeddings while keeping both pre-trained LLMs and visual encoders frozen. Similarly, for audio caption tasks, audio embeddings are mapped to a sequence of prefix vectors and then taken as the context input for caption generation (Kim et al., 2023; Schaumlöffel et al., 2023; Xu et al., 2024). However, to the best of our knowledge, we are",
|
| 349 |
+
"bbox": [
|
| 350 |
+
507,
|
| 351 |
+
646,
|
| 352 |
+
884,
|
| 353 |
+
921
|
| 354 |
+
],
|
| 355 |
+
"page_idx": 1
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"type": "footer",
|
| 359 |
+
"text": "15010",
|
| 360 |
+
"bbox": [
|
| 361 |
+
477,
|
| 362 |
+
927,
|
| 363 |
+
526,
|
| 364 |
+
939
|
| 365 |
+
],
|
| 366 |
+
"page_idx": 1
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"type": "page_number",
|
| 370 |
+
"text": "2",
|
| 371 |
+
"bbox": [
|
| 372 |
+
492,
|
| 373 |
+
942,
|
| 374 |
+
504,
|
| 375 |
+
954
|
| 376 |
+
],
|
| 377 |
+
"page_idx": 1
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"type": "image",
|
| 381 |
+
"img_path": "images/d7ea11bb656ee10d6c8c3c961bb72905d32a26d2d4e5a25a7f7f24d9fdd4a8a8.jpg",
|
| 382 |
+
"image_caption": [
|
| 383 |
+
"Figure 2: The overall architecture of PerceptiveAgent. Three components are interconnected: the speech captioner, the LLM and the MSMA-Synthesizer. The speech captioner serves as a multi-modal sensory system, perceiving acoustic information from the dialogue history, which is crucial for discerning the speakers' intentions. The LLM acts as the cognitive core, responsible for comprehending the speakers' thoughts and emotions. Conditioned on the response contents and multiple attributes provided by the LLM, the MSMA-Synthesizer generates expressive speech outputs."
|
| 384 |
+
],
|
| 385 |
+
"image_footnote": [],
|
| 386 |
+
"bbox": [
|
| 387 |
+
126,
|
| 388 |
+
54,
|
| 389 |
+
880,
|
| 390 |
+
294
|
| 391 |
+
],
|
| 392 |
+
"page_idx": 2
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"type": "text",
|
| 396 |
+
"text": "the first to construct a speech captioner capable of perceiving acoustic information in dialogues.",
|
| 397 |
+
"bbox": [
|
| 398 |
+
112,
|
| 399 |
+
423,
|
| 400 |
+
487,
|
| 401 |
+
455
|
| 402 |
+
],
|
| 403 |
+
"page_idx": 2
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"type": "text",
|
| 407 |
+
"text": "2.3 Expressive Text-to-Speech Synthesis",
|
| 408 |
+
"text_level": 1,
|
| 409 |
+
"bbox": [
|
| 410 |
+
112,
|
| 411 |
+
469,
|
| 412 |
+
443,
|
| 413 |
+
483
|
| 414 |
+
],
|
| 415 |
+
"page_idx": 2
|
| 416 |
+
},
|
| 417 |
+
{
|
| 418 |
+
"type": "text",
|
| 419 |
+
"text": "Given a transcript, text-to-speech (TTS) models achieve voice variability by conditioning on a zero-shot speech prompt or a text prompt of the desired style. For instance, zero-shot TTS systems reproduce the speaker characteristics and acoustic environments of a speech prompt through in-context learning (Wu et al., 2022; Wang et al., 2023; Shen et al., 2023; Le et al., 2023). However, these systems lack independent control over speaking styles, including prosody, emotion, and acoustic environment. To address this, text prompts have been introduced for more natural and general speech synthesis. Approaches like (Guo et al., 2023; Leng et al., 2023; Shimizu et al., 2023; Ji et al., 2023) express speaking styles in natural language, while methods such as (Polyak et al., 2021; Nguyen et al., 2023) utilize explicit labels to generate diverse speech that matches the prompt. We follow the latter direction and construct a speech synthesis model with multiple speaking style labels.",
|
| 420 |
+
"bbox": [
|
| 421 |
+
112,
|
| 422 |
+
491,
|
| 423 |
+
489,
|
| 424 |
+
815
|
| 425 |
+
],
|
| 426 |
+
"page_idx": 2
|
| 427 |
+
},
|
| 428 |
+
{
|
| 429 |
+
"type": "text",
|
| 430 |
+
"text": "3 Methods",
|
| 431 |
+
"text_level": 1,
|
| 432 |
+
"bbox": [
|
| 433 |
+
112,
|
| 434 |
+
828,
|
| 435 |
+
225,
|
| 436 |
+
843
|
| 437 |
+
],
|
| 438 |
+
"page_idx": 2
|
| 439 |
+
},
|
| 440 |
+
{
|
| 441 |
+
"type": "text",
|
| 442 |
+
"text": "As a multimodal dialog system, PerceptiveAgent is capable of audio modality perception and empathetic speech generation, which is achieved through the incorporation of prosodic informa",
|
| 443 |
+
"bbox": [
|
| 444 |
+
112,
|
| 445 |
+
857,
|
| 446 |
+
489,
|
| 447 |
+
921
|
| 448 |
+
],
|
| 449 |
+
"page_idx": 2
|
| 450 |
+
},
|
| 451 |
+
{
|
| 452 |
+
"type": "text",
|
| 453 |
+
"text": "tion expressed in natural language. To capture prosodic features from speech inputs, we propose a novel speech caption model that aligns audio features with the latent space of a pre-trained language model. To enhance empathy and diversity of the simulated speech communication, a multispeaker and multi-attribute vocoder is developed. This vocoder synthesizes speech by conditioning on both response contents and captions of speaking styles, resulting in more engaging and realistic dialogues.",
|
| 454 |
+
"bbox": [
|
| 455 |
+
507,
|
| 456 |
+
423,
|
| 457 |
+
884,
|
| 458 |
+
600
|
| 459 |
+
],
|
| 460 |
+
"page_idx": 2
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"type": "text",
|
| 464 |
+
"text": "3.1 Speech Captioner",
|
| 465 |
+
"text_level": 1,
|
| 466 |
+
"bbox": [
|
| 467 |
+
507,
|
| 468 |
+
640,
|
| 469 |
+
695,
|
| 470 |
+
657
|
| 471 |
+
],
|
| 472 |
+
"page_idx": 2
|
| 473 |
+
},
|
| 474 |
+
{
|
| 475 |
+
"type": "text",
|
| 476 |
+
"text": "The speech caption model is designed to capture prosodic information and transcribe it as textual descriptions. It operates by encoding speech inputs by the speech encoder in ImageBind (Girdhar et al., 2023), followed by description generation by the pre-trained GPT-2 decoder (Radford et al., 2019). To bridge the gap between the speech encoder and the text decoder, we introduce a Querying Transformer (Q-former) pre-trained in BuboGPT (Zhao et al., 2023). This model is connected with a linear projection layer, which is subsequently followed by a text decoder. To effectively fine-tune this model, we integrate the following two fine-tuning strategies, while keeping the speech encoder frozen throughout the training procedure.",
|
| 477 |
+
"bbox": [
|
| 478 |
+
507,
|
| 479 |
+
678,
|
| 480 |
+
884,
|
| 481 |
+
921
|
| 482 |
+
],
|
| 483 |
+
"page_idx": 2
|
| 484 |
+
},
|
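As a reading aid for the captioner entry above (frozen ImageBind speech encoder, Q-Former, linear projection, GPT-2 decoder), here is a minimal PyTorch sketch of how such a pipeline could be wired together. The module interfaces, dimensions and number of query tokens are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class SpeechCaptioner(nn.Module):
    """Illustrative pipeline: frozen speech encoder -> Q-Former -> projection -> GPT-2."""

    def __init__(self, speech_encoder, qformer, gpt2, num_queries=32, enc_dim=1024, gpt_dim=768):
        super().__init__()
        self.speech_encoder = speech_encoder          # frozen audio tower (e.g. ImageBind)
        for p in self.speech_encoder.parameters():
            p.requires_grad = False
        self.qformer = qformer                        # trainable querying transformer
        self.queries = nn.Parameter(torch.randn(1, num_queries, enc_dim))
        self.proj = nn.Linear(enc_dim, gpt_dim)       # map queries into the GPT-2 embedding space
        self.gpt2 = gpt2                              # trainable text decoder

    def encode(self, speech):
        with torch.no_grad():
            audio_feats = self.speech_encoder(speech)         # (B, T, enc_dim), kept frozen
        q = self.queries.expand(audio_feats.size(0), -1, -1)  # fixed-size query vector
        q = self.qformer(query=q, context=audio_feats)        # queries cross-attend to audio
        return self.proj(q)                                   # (B, num_queries, gpt_dim) prefix
```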
| 485 |
+
{
|
| 486 |
+
"type": "footer",
|
| 487 |
+
"text": "15011",
|
| 488 |
+
"bbox": [
|
| 489 |
+
477,
|
| 490 |
+
927,
|
| 491 |
+
522,
|
| 492 |
+
939
|
| 493 |
+
],
|
| 494 |
+
"page_idx": 2
|
| 495 |
+
},
|
| 496 |
+
{
|
| 497 |
+
"type": "page_number",
|
| 498 |
+
"text": "3",
|
| 499 |
+
"bbox": [
|
| 500 |
+
492,
|
| 501 |
+
941,
|
| 502 |
+
504,
|
| 503 |
+
953
|
| 504 |
+
],
|
| 505 |
+
"page_idx": 2
|
| 506 |
+
},
|
| 507 |
+
{
|
| 508 |
+
"type": "text",
|
| 509 |
+
"text": "3.1.1 Multi-modal Embedding Alignment",
|
| 510 |
+
"text_level": 1,
|
| 511 |
+
"bbox": [
|
| 512 |
+
112,
|
| 513 |
+
84,
|
| 514 |
+
455,
|
| 515 |
+
99
|
| 516 |
+
],
|
| 517 |
+
"page_idx": 3
|
| 518 |
+
},
|
| 519 |
+
{
|
| 520 |
+
"type": "text",
|
| 521 |
+
"text": "Prefix tuning is utilized to align the output of the Q-former with the latent space of the text decoder. A query vector with fixed dimensions is generated by the Q-former. These embeddings interact with each other through self-attention layers and with frozen audio features through cross-attention layers. To bridge the gap with the word embedding space, query embeddings are used as prefix vectors and attended to by the text decoder. This bottleneck architecture serves to compel the queries to extract the acoustic information that is most relevant to the textual descriptions.",
|
| 522 |
+
"bbox": [
|
| 523 |
+
112,
|
| 524 |
+
104,
|
| 525 |
+
489,
|
| 526 |
+
297
|
| 527 |
+
],
|
| 528 |
+
"page_idx": 3
|
| 529 |
+
},
|
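One way to realize the prefix-tuning alignment described in the block above is to prepend the projected query embeddings to the caption token embeddings and train the decoder with a standard language-modelling loss that ignores the prefix positions. The sketch below uses Hugging Face's GPT-2; the masking convention and helper names are assumptions for illustration only.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
decoder = GPT2LMHeadModel.from_pretrained("gpt2")

def caption_loss(prefix_embeds, caption_text):
    """prefix_embeds: (1, num_queries, 768) query vectors projected into GPT-2 space."""
    ids = tokenizer(caption_text, return_tensors="pt").input_ids          # (1, L)
    tok_embeds = decoder.transformer.wte(ids)                             # (1, L, 768)
    inputs_embeds = torch.cat([prefix_embeds, tok_embeds], dim=1)
    # Supervise only the caption tokens; -100 positions are ignored by the LM loss.
    labels = torch.cat(
        [torch.full(prefix_embeds.shape[:2], -100, dtype=torch.long), ids], dim=1
    )
    return decoder(inputs_embeds=inputs_embeds, labels=labels).loss
```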
| 530 |
+
{
|
| 531 |
+
"type": "text",
|
| 532 |
+
"text": "3.1.2 Instruction Tuning",
|
| 533 |
+
"text_level": 1,
|
| 534 |
+
"bbox": [
|
| 535 |
+
112,
|
| 536 |
+
306,
|
| 537 |
+
324,
|
| 538 |
+
322
|
| 539 |
+
],
|
| 540 |
+
"page_idx": 3
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"type": "text",
|
| 544 |
+
"text": "To bridge the gap between the next-word prediction objective of the pre-trained decoder and the objective of acquiring multi-modal information conditioned on prefix sequences, instruction tuning is employed to train the speech captioner. We first construct an instructional dataset, where each instance comprises three elements: a query vector, an instruction, and a caption. The instruction is described as a natural language text sequence that specifies the task, serving to constrain the model's outputs to align with desired response characteristics or domain knowledge. This provides a channel for humans to intervene with the model's behaviors. Varied instructions are gathered using GPT-3.5-Turbo in this work. Additionally, the caption represents the desired output following the instruction, while the query vector is derived from acoustic representations. Throughout the training procedure, the parameters of the speech encoder are fixed, while the Q-former and text decoder remain trainable. During each inference process, instructions are randomly selected and incorporated into the generated sequence to enhance diversity and simulate human cognitive processes more effectively, thereby yielding more varied outputs.",
|
| 545 |
+
"bbox": [
|
| 546 |
+
112,
|
| 547 |
+
326,
|
| 548 |
+
489,
|
| 549 |
+
728
|
| 550 |
+
],
|
| 551 |
+
"page_idx": 3
|
| 552 |
+
},
|
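Each instruction-tuning instance described above pairs a query prefix with an instruction and a target caption. The sketch below shows how such an instance could be assembled with the loss restricted to the caption span; the instruction wording and field names are illustrative, not the dataset's actual schema.

```python
import random
import torch

# Paraphrased task instructions of the kind gathered with GPT-3.5-Turbo (illustrative).
INSTRUCTIONS = [
    "Describe the speaking style of this audio clip.",
    "How does the speaker sound in terms of pitch, speed, energy and emotion?",
]

def build_instance(prefix_embeds, caption, tokenizer, embed_layer):
    """Concatenate [query prefix | instruction | caption]; supervise only the caption."""
    instruction = random.choice(INSTRUCTIONS)      # random choice also mirrors inference
    instr_ids = tokenizer(instruction, return_tensors="pt").input_ids
    cap_ids = tokenizer(caption, return_tensors="pt").input_ids
    inputs_embeds = torch.cat(
        [prefix_embeds, embed_layer(instr_ids), embed_layer(cap_ids)], dim=1
    )
    ignore = torch.full((1, prefix_embeds.size(1) + instr_ids.size(1)), -100, dtype=torch.long)
    labels = torch.cat([ignore, cap_ids], dim=1)   # loss on caption tokens only
    return inputs_embeds, labels
```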
| 553 |
+
{
|
| 554 |
+
"type": "text",
|
| 555 |
+
"text": "3.2 PerceptiveAgent",
|
| 556 |
+
"text_level": 1,
|
| 557 |
+
"bbox": [
|
| 558 |
+
112,
|
| 559 |
+
739,
|
| 560 |
+
290,
|
| 561 |
+
755
|
| 562 |
+
],
|
| 563 |
+
"page_idx": 3
|
| 564 |
+
},
|
| 565 |
+
{
|
| 566 |
+
"type": "text",
|
| 567 |
+
"text": "Figure 2 illustrates the overall framework of PerceptiveAgent, a multi-modal dialogue system comprising three interconnected stages: Intention Discerning by the speech captioner, Comprehension through Sensory Integration by the LLM and Expressive Speech Synthesis by the MSMA-Synthesizer. PerceptiveAgent exhibits two key characteristics: (1) It leverages natural language to perceive and express acoustic information, and (2) It employs an LLM as the cognitive core in",
|
| 568 |
+
"bbox": [
|
| 569 |
+
112,
|
| 570 |
+
760,
|
| 571 |
+
489,
|
| 572 |
+
921
|
| 573 |
+
],
|
| 574 |
+
"page_idx": 3
|
| 575 |
+
},
|
| 576 |
+
{
|
| 577 |
+
"type": "text",
|
| 578 |
+
"text": "the system, to comprehend multi-modal contextual history and deliver audio responses.",
|
| 579 |
+
"bbox": [
|
| 580 |
+
507,
|
| 581 |
+
84,
|
| 582 |
+
880,
|
| 583 |
+
116
|
| 584 |
+
],
|
| 585 |
+
"page_idx": 3
|
| 586 |
+
},
|
| 587 |
+
{
|
| 588 |
+
"type": "text",
|
| 589 |
+
"text": "3.2.1 Caption for Intention Discerning",
|
| 590 |
+
"text_level": 1,
|
| 591 |
+
"bbox": [
|
| 592 |
+
507,
|
| 593 |
+
124,
|
| 594 |
+
826,
|
| 595 |
+
140
|
| 596 |
+
],
|
| 597 |
+
"page_idx": 3
|
| 598 |
+
},
|
| 599 |
+
{
|
| 600 |
+
"type": "text",
|
| 601 |
+
"text": "In the initial stage, a speech caption model is employed to interpret acoustic information from audio inputs. Each speech within the dialogue history is encoded into latent features by a frozen speech encoder. These features are then compressed into a query vector with fixed dimensions, sharing the same latent space as the word embedding of a text decoder. Conditioned on this query sequence and instruction prompt, a textual caption describing the speaking styles for each speech is deduced by the text decoder.",
|
| 602 |
+
"bbox": [
|
| 603 |
+
507,
|
| 604 |
+
143,
|
| 605 |
+
884,
|
| 606 |
+
319
|
| 607 |
+
],
|
| 608 |
+
"page_idx": 3
|
| 609 |
+
},
|
| 610 |
+
{
|
| 611 |
+
"type": "text",
|
| 612 |
+
"text": "3.2.2 Comprehension through Sensory Integration",
|
| 613 |
+
"text_level": 1,
|
| 614 |
+
"bbox": [
|
| 615 |
+
507,
|
| 616 |
+
328,
|
| 617 |
+
828,
|
| 618 |
+
360
|
| 619 |
+
],
|
| 620 |
+
"page_idx": 3
|
| 621 |
+
},
|
| 622 |
+
{
|
| 623 |
+
"type": "text",
|
| 624 |
+
"text": "Subsequently, an LLM module acting as the cognitive core is integrated into the system, where GPT-3.5-Turbo is employed. The transcribed textual content for each audio is merged with the previously generated caption before being fed into the LLM. Prompts in Appendix A and B are designed to effectively leverage the LLM's contextual understanding abilities. Upon recognizing speakers' intentions by assimilating both the contextual caption and content, the LLM deduces the relevant dialogue content and generates a caption describing how to articulate the derived content.",
|
| 625 |
+
"bbox": [
|
| 626 |
+
507,
|
| 627 |
+
363,
|
| 628 |
+
884,
|
| 629 |
+
555
|
| 630 |
+
],
|
| 631 |
+
"page_idx": 3
|
| 632 |
+
},
|
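The comprehension stage above merges each turn's transcript with its caption before querying GPT-3.5-Turbo. The exact prompts are given in the paper's Appendix A and B; the snippet below only sketches the general shape of such a call with the OpenAI client, and the turn formatting is an invented stand-in.

```python
from openai import OpenAI

client = OpenAI()

def respond(dialogue_turns):
    """dialogue_turns: list of (speaker, transcript, caption) tuples from earlier stages."""
    history = "\n".join(
        f"{spk}: {text} [speaking style: {cap}]" for spk, text, cap in dialogue_turns
    )
    messages = [
        {"role": "system", "content": "You are an empathetic dialogue agent. "
         "Reply with the next turn's content and a caption describing how to say it."},
        {"role": "user", "content": history},
    ]
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return reply.choices[0].message.content   # expected to contain content plus style caption
```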
| 633 |
+
{
|
| 634 |
+
"type": "text",
|
| 635 |
+
"text": "3.2.3 Expressive Speech Synthesis",
|
| 636 |
+
"text_level": 1,
|
| 637 |
+
"bbox": [
|
| 638 |
+
507,
|
| 639 |
+
564,
|
| 640 |
+
793,
|
| 641 |
+
580
|
| 642 |
+
],
|
| 643 |
+
"page_idx": 3
|
| 644 |
+
},
|
| 645 |
+
{
|
| 646 |
+
"type": "text",
|
| 647 |
+
"text": "Finally, empathetic audio responses are synthesized by the MSMA-Synthesizer, a Multi-Speaker and Multi-Attribute vocoder that is conditioned on the generated dialogue contents and captions. This vocoder is a modification of (Nguyen et al., 2023) to facilitate fine control over speech expressiveness. In addition to taking discrete speech units, speaker and style (emotion) as inputs, our vocoder introduces multiple prosodic attributes, including pitch, speed and energy. To synthesize each inference, the LLM's outputs of dialogue contents and captions are transformed into discrete units or attribute labels respectively, before being fed into the vocoder. Specifically, a text-to-unit (T2U) model is utilized to convert response contents into acoustic units with a Transformer machine translation structure (Vaswani et al., 2017). Emotional and prosodic labels are recognized from response captions by sentence classifiers, accomplished with GPT-3.5-Turbo in this work, while the speaker label is randomly selected.",
|
| 648 |
+
"bbox": [
|
| 649 |
+
507,
|
| 650 |
+
583,
|
| 651 |
+
884,
|
| 652 |
+
920
|
| 653 |
+
],
|
| 654 |
+
"page_idx": 3
|
| 655 |
+
},
|
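Before synthesis, the response caption has to be collapsed into discrete attribute labels (emotion, pitch, speed, energy) and paired with discrete units from the T2U model. The paper performs this labeling with GPT-3.5-Turbo as a sentence classifier; the keyword-matching stand-in below is only a hedged illustration of the label interface the vocoder expects.

```python
# Illustrative cue lists; the real label inventory follows the EXPRESSO/TextrolSpeech labels.
PITCH_CUES = {"high": ["high-pitched", "treble"], "low": ["lower vocal", "deep"]}

def caption_to_labels(caption: str) -> dict:
    """Crude keyword fallback for turning a style caption into vocoder attribute labels."""
    text = caption.lower()
    labels = {"pitch": "neutral", "speed": "neutral", "energy": "neutral", "emotion": "neutral"}
    for value, cues in PITCH_CUES.items():
        if any(cue in text for cue in cues):
            labels["pitch"] = value
    if "rapid" in text or "quickly" in text:
        labels["speed"] = "fast"
    if "subdued" in text or "softly" in text:
        labels["energy"] = "low"
    if "excited" in text or "energetically" in text:
        labels["emotion"] = "happy"
    return labels

# e.g. caption_to_labels("In a hushed voice, she speaks rapidly.") -> fast speed, low energy
```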
| 656 |
+
{
|
| 657 |
+
"type": "footer",
|
| 658 |
+
"text": "15012",
|
| 659 |
+
"bbox": [
|
| 660 |
+
475,
|
| 661 |
+
927,
|
| 662 |
+
524,
|
| 663 |
+
939
|
| 664 |
+
],
|
| 665 |
+
"page_idx": 3
|
| 666 |
+
},
|
| 667 |
+
{
|
| 668 |
+
"type": "page_number",
|
| 669 |
+
"text": "4",
|
| 670 |
+
"bbox": [
|
| 671 |
+
492,
|
| 672 |
+
942,
|
| 673 |
+
504,
|
| 674 |
+
953
|
| 675 |
+
],
|
| 676 |
+
"page_idx": 3
|
| 677 |
+
},
|
| 678 |
+
{
|
| 679 |
+
"type": "text",
|
| 680 |
+
"text": "The architecture of the vocoder comprises a speaker embedder, an attribute embedder and a HIFIGAN vocoder. The speaker embedder uses look-up tables to embed speaker identities, while a set of controllable attributes including speed, emotion, energy and pitch are embedded by the attribute embedder. To synthesize expressive speech, discrete units are initially embedded and up-sampled through a series of blocks consisting of transposed convolution and a residual block with dilated layers. Prior to duration prediction, this up-sampled sequence is concatenated with the speed embedding. The speaker embedding and style embedding are subsequently concatenated to each frame in the up-sampled sequence, which is transformed to a mel-spectrogram by the HiFiGAN generator.",
|
| 681 |
+
"bbox": [
|
| 682 |
+
112,
|
| 683 |
+
84,
|
| 684 |
+
492,
|
| 685 |
+
343
|
| 686 |
+
],
|
| 687 |
+
"page_idx": 4
|
| 688 |
+
},
|
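The conditioning path described above (unit embedding and up-sampling, speed concatenated before duration prediction, speaker and style broadcast per frame) can be sketched in PyTorch as follows. Layer sizes are placeholders and the duration predictor and HiFi-GAN generator are omitted; this is not the released MSMA-Synthesizer.

```python
import torch
import torch.nn as nn

class MSMAConditioner(nn.Module):
    """Sketch of injecting multiple attributes before a HiFi-GAN-style generator."""

    def __init__(self, n_units=2000, n_speakers=4, n_styles=26, dim=128):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, dim)
        self.speaker_emb = nn.Embedding(n_speakers, dim)      # look-up table per speaker
        self.style_emb = nn.Embedding(n_styles, dim)          # emotion / style label
        self.attr_emb = nn.ModuleDict(                        # pitch, speed, energy labels
            {name: nn.Embedding(3, dim) for name in ("pitch", "speed", "energy")}
        )
        self.upsample = nn.ConvTranspose1d(dim, dim, kernel_size=4, stride=2, padding=1)

    def forward(self, units, speaker, style, attrs):
        x = self.unit_emb(units).transpose(1, 2)               # (B, dim, T)
        x = self.upsample(x).transpose(1, 2)                   # (B, T', dim)
        frames = x.size(1)
        # In the paper, speed is concatenated before duration prediction; here all
        # frame-level conditions are simply appended along the channel axis.
        conds = [self.speaker_emb(speaker), self.style_emb(style)] + [
            self.attr_emb[k](v) for k, v in attrs.items()
        ]
        conds = torch.cat([c.unsqueeze(1).expand(-1, frames, -1) for c in conds], dim=-1)
        return torch.cat([x, conds], dim=-1)                   # handed to the generator
```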
| 689 |
+
{
|
| 690 |
+
"type": "text",
|
| 691 |
+
"text": "4 Experiments",
|
| 692 |
+
"text_level": 1,
|
| 693 |
+
"bbox": [
|
| 694 |
+
112,
|
| 695 |
+
355,
|
| 696 |
+
260,
|
| 697 |
+
372
|
| 698 |
+
],
|
| 699 |
+
"page_idx": 4
|
| 700 |
+
},
|
| 701 |
+
{
|
| 702 |
+
"type": "text",
|
| 703 |
+
"text": "4.1 Experimental Setup",
|
| 704 |
+
"text_level": 1,
|
| 705 |
+
"bbox": [
|
| 706 |
+
112,
|
| 707 |
+
382,
|
| 708 |
+
317,
|
| 709 |
+
398
|
| 710 |
+
],
|
| 711 |
+
"page_idx": 4
|
| 712 |
+
},
|
| 713 |
+
{
|
| 714 |
+
"type": "text",
|
| 715 |
+
"text": "Datasets. We train our speech captioner on the TextTrolSpeech (Ji et al., 2023) dataset, which consists of 236,220 pairs of captions and the corresponding speech samples. The captions in this dataset describe speaking styles in terms of five factors: gender, emotion, pitch, speed and energy.",
|
| 716 |
+
"bbox": [
|
| 717 |
+
112,
|
| 718 |
+
404,
|
| 719 |
+
489,
|
| 720 |
+
500
|
| 721 |
+
],
|
| 722 |
+
"page_idx": 4
|
| 723 |
+
},
|
| 724 |
+
{
|
| 725 |
+
"type": "text",
|
| 726 |
+
"text": "For the MSMA-Synthesizer, we reproduce a vocoder proposed in (Nguyen et al., 2023) using the EXPRESSO, LJSpeech (Ito and Johnson, 2017) and VCTK (Yamagishi et al., 2019) datasets. The EXPRESSO dataset is subsequently labeled by the speech captioner and GPT-3.5-Turbo to recognize attributes of pitch, speed and energy for each speech. We then utilize this labeled EXPRESSO dataset and the reproduced vocoder to train the MSMA-Synthesizer. We refer to the reading and conversation sections of EXPRESSO as Exp-R and Exp-I respectively. Additionally, a T2U model is trained on the same datasets with the MSMA-Synthesizer to maintain consistency in unit distribution.",
|
| 727 |
+
"bbox": [
|
| 728 |
+
112,
|
| 729 |
+
502,
|
| 730 |
+
489,
|
| 731 |
+
741
|
| 732 |
+
],
|
| 733 |
+
"page_idx": 4
|
| 734 |
+
},
|
| 735 |
+
{
|
| 736 |
+
"type": "text",
|
| 737 |
+
"text": "To evaluate the overall performance of our system, we utilize a speech dialogue dataset from MELD (Poria et al., 2019). This dataset provides emotion labels for each sentence, which serve as ground truth labels for both response content and speech evaluation. The speeches in this dataset are recorded in realistic scenes with interruptions and environmental noise. In our evaluation, we only consider conversations with two speakers.",
|
| 738 |
+
"bbox": [
|
| 739 |
+
112,
|
| 740 |
+
743,
|
| 741 |
+
489,
|
| 742 |
+
888
|
| 743 |
+
],
|
| 744 |
+
"page_idx": 4
|
| 745 |
+
},
|
| 746 |
+
{
|
| 747 |
+
"type": "text",
|
| 748 |
+
"text": "We utilize English datasets throughout the entire training process. As a consequence, Percep",
|
| 749 |
+
"bbox": [
|
| 750 |
+
112,
|
| 751 |
+
889,
|
| 752 |
+
489,
|
| 753 |
+
921
|
| 754 |
+
],
|
| 755 |
+
"page_idx": 4
|
| 756 |
+
},
|
| 757 |
+
{
|
| 758 |
+
"type": "text",
|
| 759 |
+
"text": "tiveAgent currently supports only the English language. However, it is noteworthy that PerceptiveAgent can be readily expanded to accommodate multiple languages. Only the MSMA-Synthesizer module requires modification, as the language-agnostic nature of the speech captioner allows it to generate captions from various languages. Meanwhile, existing methods can recognize semantic contents and translate them into English.",
|
| 760 |
+
"bbox": [
|
| 761 |
+
507,
|
| 762 |
+
84,
|
| 763 |
+
884,
|
| 764 |
+
227
|
| 765 |
+
],
|
| 766 |
+
"page_idx": 4
|
| 767 |
+
},
|
| 768 |
+
{
|
| 769 |
+
"type": "text",
|
| 770 |
+
"text": "Configurations. We utilize the speech encoder in ImageBind (Girdhar et al., 2023), the pre-trained Q-former in BuboGPT (Zhao et al., 2023), and the pre-trained GPT-2 (Radford et al., 2019) to implement the speech captioner. Finetuning is conducted for 43,000 steps with a batch size of 16. For decoding, we use Top-k sampling with $k = 10$ and set the minimum and maximum sequence lengths to 20 and 50, respectively. We reproduce the vocoder for 400,000 steps with a batch size of 32 and learning rate of 0.0004, and train the MSMA-Synthesizer for 200,000 steps with a batch size of 32 and learning rate of 0.0004. The T2U model is structured as a sequence-to-sequence transformer with 4 encoder layers, 4 decoder layers, and 4 attention heads, with a dropout of 0.1. We utilize HuBERT (Hsu et al., 2021) with 2000 clusters to acquire units as targets<sup>1</sup>, provided by the textlesslib toolbox (Kharitonov et al., 2022). Decoding is performed using Topk sampling with $k = 10$ . All experiments are conducted on 4 NVIDIA GeForce RTX 4090 GPUs.",
|
| 771 |
+
"bbox": [
|
| 772 |
+
507,
|
| 773 |
+
229,
|
| 774 |
+
884,
|
| 775 |
+
567
|
| 776 |
+
],
|
| 777 |
+
"page_idx": 4
|
| 778 |
+
},
|
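The T2U configuration stated above (4 encoder layers, 4 decoder layers, 4 attention heads, dropout 0.1) maps directly onto a standard sequence-to-sequence Transformer. The sketch below shows one way to instantiate it with plain PyTorch; the model dimension and vocabulary sizes are assumptions rather than reported values.

```python
import torch.nn as nn

TEXT_VOCAB = 10_000     # assumed subword vocabulary for response contents
UNIT_VOCAB = 2_000      # HuBERT units with 2000 clusters, as stated in the configuration

# Sequence-to-sequence Transformer matching the reported T2U hyper-parameters.
t2u_backbone = nn.Transformer(
    d_model=512, nhead=4,
    num_encoder_layers=4, num_decoder_layers=4,
    dim_feedforward=2048, dropout=0.1, batch_first=True,
)
text_embedding = nn.Embedding(TEXT_VOCAB, 512)   # encoder-side token embeddings
unit_embedding = nn.Embedding(UNIT_VOCAB, 512)   # decoder-side unit embeddings
unit_head = nn.Linear(512, UNIT_VOCAB)           # predicts the next discrete acoustic unit
```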
| 779 |
+
{
|
| 780 |
+
"type": "text",
|
| 781 |
+
"text": "4.2 Evaluation",
|
| 782 |
+
"text_level": 1,
|
| 783 |
+
"bbox": [
|
| 784 |
+
507,
|
| 785 |
+
580,
|
| 786 |
+
640,
|
| 787 |
+
594
|
| 788 |
+
],
|
| 789 |
+
"page_idx": 4
|
| 790 |
+
},
|
| 791 |
+
{
|
| 792 |
+
"type": "text",
|
| 793 |
+
"text": "Speech-GPT3.5. We implement Speech-GPT3.5, a dialogue system focusing solely on linguistic information as a baseline. According to the textual history content recognized from the speech input, this system comprehends dialogue context with GPT-3.5-Turbo. After generating the response content, the audio response is synthesized by an off-the-shelf TTS (text-to-speech) model provided by OpenAI<sup>2</sup>.",
|
| 794 |
+
"bbox": [
|
| 795 |
+
507,
|
| 796 |
+
601,
|
| 797 |
+
884,
|
| 798 |
+
745
|
| 799 |
+
],
|
| 800 |
+
"page_idx": 4
|
| 801 |
+
},
|
| 802 |
+
{
|
| 803 |
+
"type": "text",
|
| 804 |
+
"text": "Metrics. The performance of PerceptiveAgent is evaluated in terms of two fundamental aspects: 1) cognitive empathy demonstrates the ability to consider the perspective of speakers, reflected in the content of the response; and 2) affective empathy exhibits the ability to emotionally understand and share the speaker's feelings, reflected in the",
|
| 805 |
+
"bbox": [
|
| 806 |
+
507,
|
| 807 |
+
747,
|
| 808 |
+
882,
|
| 809 |
+
859
|
| 810 |
+
],
|
| 811 |
+
"page_idx": 4
|
| 812 |
+
},
|
| 813 |
+
{
|
| 814 |
+
"type": "page_footnote",
|
| 815 |
+
"text": "$^{1}$ https://dl.fbaipublicfiles.com/hubert/hubert_base_ln960.pt",
|
| 816 |
+
"bbox": [
|
| 817 |
+
507,
|
| 818 |
+
870,
|
| 819 |
+
880,
|
| 820 |
+
894
|
| 821 |
+
],
|
| 822 |
+
"page_idx": 4
|
| 823 |
+
},
|
| 824 |
+
{
|
| 825 |
+
"type": "page_footnote",
|
| 826 |
+
"text": "2https://platform.openai.com/docs/guides/text-to-speech",
|
| 827 |
+
"bbox": [
|
| 828 |
+
507,
|
| 829 |
+
895,
|
| 830 |
+
842,
|
| 831 |
+
919
|
| 832 |
+
],
|
| 833 |
+
"page_idx": 4
|
| 834 |
+
},
|
| 835 |
+
{
|
| 836 |
+
"type": "footer",
|
| 837 |
+
"text": "15013",
|
| 838 |
+
"bbox": [
|
| 839 |
+
475,
|
| 840 |
+
927,
|
| 841 |
+
524,
|
| 842 |
+
940
|
| 843 |
+
],
|
| 844 |
+
"page_idx": 4
|
| 845 |
+
},
|
| 846 |
+
{
|
| 847 |
+
"type": "page_number",
|
| 848 |
+
"text": "5",
|
| 849 |
+
"bbox": [
|
| 850 |
+
492,
|
| 851 |
+
942,
|
| 852 |
+
504,
|
| 853 |
+
953
|
| 854 |
+
],
|
| 855 |
+
"page_idx": 4
|
| 856 |
+
},
|
| 857 |
+
{
|
| 858 |
+
"type": "table",
|
| 859 |
+
"img_path": "images/de8342486705428f037485014911230d490868f5bdcd452500ed4b8ee663cd37.jpg",
|
| 860 |
+
"table_caption": [],
|
| 861 |
+
"table_footnote": [],
|
| 862 |
+
"table_body": "<table><tr><td></td><td>BERTScore</td><td>Accuracy</td></tr><tr><td>Speech-GPT3.5</td><td>53.03±10.20</td><td>0.74</td></tr><tr><td>PerceptiveAgent</td><td>54.36±9.25</td><td>21.89</td></tr><tr><td>-w/o captions</td><td>-</td><td>16.53</td></tr></table>",
|
| 863 |
+
"bbox": [
|
| 864 |
+
127,
|
| 865 |
+
80,
|
| 866 |
+
473,
|
| 867 |
+
162
|
| 868 |
+
],
|
| 869 |
+
"page_idx": 5
|
| 870 |
+
},
|
| 871 |
+
{
|
| 872 |
+
"type": "text",
|
| 873 |
+
"text": "Table 1: Performance evaluation of PerceptiveAgent. BERTScore $(\\%)$ measures the quality of cognitive empathy in linguistic contents, while accuracy $(\\%)$ assesses the quality of affective empathy in acoustic responses.",
|
| 874 |
+
"bbox": [
|
| 875 |
+
112,
|
| 876 |
+
172,
|
| 877 |
+
487,
|
| 878 |
+
231
|
| 879 |
+
],
|
| 880 |
+
"page_idx": 5
|
| 881 |
+
},
|
| 882 |
+
{
|
| 883 |
+
"type": "text",
|
| 884 |
+
"text": "prosody of the generated audio response. Cognitive and affective empathy are assessed by evaluating the quality of generated textual responses and audio responses, respectively.",
|
| 885 |
+
"bbox": [
|
| 886 |
+
112,
|
| 887 |
+
277,
|
| 888 |
+
489,
|
| 889 |
+
342
|
| 890 |
+
],
|
| 891 |
+
"page_idx": 5
|
| 892 |
+
},
|
| 893 |
+
{
|
| 894 |
+
"type": "text",
|
| 895 |
+
"text": "To evaluate the quality of dialogue text generation, we employ the BERTScore automatic evaluation metric proposed by Zhang et al. (2020), which computes a similarity score for each token in the candidate sentence with each token in the reference sentence. To evaluate the expressiveness of audio generation, we employ an expressive style classifier proposed by Nguyen et al. (2023) to recognize emotion labels for both generated and true speeches. Classification accuracy is used to measure the performance.",
|
| 896 |
+
"bbox": [
|
| 897 |
+
112,
|
| 898 |
+
353,
|
| 899 |
+
489,
|
| 900 |
+
529
|
| 901 |
+
],
|
| 902 |
+
"page_idx": 5
|
| 903 |
+
},
|
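BERTScore and classifier accuracy are both off-the-shelf computations. As a reference point, the content-side metric can be computed with the `bert-score` package roughly as follows; the example sentences and default English model are illustrative.

```python
from bert_score import score

candidates = ["I would love to, but I have to run."]   # generated responses
references = ["Sorry, I really need to get going."]    # ground-truth responses

# Returns per-sentence precision, recall and F1 tensors.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item() * 100:.2f}%")
```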
| 904 |
+
{
|
| 905 |
+
"type": "text",
|
| 906 |
+
"text": "Besides, we evaluate the perception ability of the speech captioner on the validation and test datasets, which are split from the TextrolSpeech dataset. We approach this model as a multi-attribute classification task. Upon generating captions from speeches, the predicted labels for attributes including gender, emotion, pitch, speed and energy are determined by a sentence classifier, GPT-3.5-Turbo, while the true labels are provided in the TextrolSpeech dataset. Weighted metrics including precision, recall and F1-score are used to quantify the disparity between the predicted and true labels.",
|
| 907 |
+
"bbox": [
|
| 908 |
+
112,
|
| 909 |
+
539,
|
| 910 |
+
489,
|
| 911 |
+
734
|
| 912 |
+
],
|
| 913 |
+
"page_idx": 5
|
| 914 |
+
},
|
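Treating the captioner as a multi-attribute classifier, the weighted precision, recall and F1 reported here can be computed per attribute with scikit-learn once the generated captions have been mapped to labels, e.g. for pitch (toy labels shown):

```python
from sklearn.metrics import precision_recall_fscore_support

# Labels extracted from generated captions vs. ground truth from TextrolSpeech (toy example).
y_pred = ["high", "low", "neutral", "high"]
y_true = ["high", "neutral", "neutral", "high"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(f"pitch: P={precision:.3f} R={recall:.3f} F1={f1:.3f}")
```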
| 915 |
+
{
|
| 916 |
+
"type": "text",
|
| 917 |
+
"text": "Moreover, the expressiveness of the speech synthesizer is assessed on the validation and test datasets split from the EXPRESSO dataset. We use the same expressive style classifier employed in affective empathy evaluation, to measure the preservation of emotion in the resynthesized speech. For evaluating the preservation of prosody, we compute the F0 Frame Error (FFE), which measures the percentage of frames with a deviation of more than $20\\%$ in pitch value between the input and resynthesized output.",
|
| 918 |
+
"bbox": [
|
| 919 |
+
112,
|
| 920 |
+
744,
|
| 921 |
+
489,
|
| 922 |
+
921
|
| 923 |
+
],
|
| 924 |
+
"page_idx": 5
|
| 925 |
+
},
|
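F0 Frame Error as described above counts frames whose pitch deviates by more than 20% between the input and the resynthesized output (the common definition also counts voicing-decision mismatches; whether that term is included is not stated here). A minimal NumPy sketch under that reading:

```python
import numpy as np

def f0_frame_error(f0_ref, f0_syn, tol=0.2):
    """Fraction of voiced reference frames whose pitch deviates by more than `tol` (20%)."""
    f0_ref, f0_syn = np.asarray(f0_ref, float), np.asarray(f0_syn, float)
    voiced = f0_ref > 0                                   # treat zero F0 as unvoiced
    deviation = np.abs(f0_syn[voiced] - f0_ref[voiced]) / f0_ref[voiced]
    return float(np.mean(deviation > tol))

# Example with two aligned F0 tracks (Hz): one of three voiced frames deviates by >20%.
print(f0_frame_error([120, 0, 130, 140], [118, 0, 170, 139]))   # -> 0.333...
```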
| 926 |
+
{
|
| 927 |
+
"type": "text",
|
| 928 |
+
"text": "4.3 Result Analysis",
|
| 929 |
+
"text_level": 1,
|
| 930 |
+
"bbox": [
|
| 931 |
+
507,
|
| 932 |
+
84,
|
| 933 |
+
678,
|
| 934 |
+
99
|
| 935 |
+
],
|
| 936 |
+
"page_idx": 5
|
| 937 |
+
},
|
| 938 |
+
{
|
| 939 |
+
"type": "text",
|
| 940 |
+
"text": "4.3.1 PerceptiveAgent",
|
| 941 |
+
"text_level": 1,
|
| 942 |
+
"bbox": [
|
| 943 |
+
507,
|
| 944 |
+
108,
|
| 945 |
+
697,
|
| 946 |
+
123
|
| 947 |
+
],
|
| 948 |
+
"page_idx": 5
|
| 949 |
+
},
|
| 950 |
+
{
|
| 951 |
+
"type": "text",
|
| 952 |
+
"text": "Table 1 presents the overall performance of PerceptiveAgent on cognitive empathy and affective empathy, evaluated on the generated content and audio, respectively. BERTScore measures the semantic similarity between the generated and real response contents, while accuracy assesses the similarity and diversity of emotions between the generated and real speeches. Overall, compared to Speech-GPT3.5, PerceptiveAgent demonstrates a strong ability in generating empathetic responses with a closer alignment to the dialogue context in terms of linguistic content and a higher expressiveness in acoustic information. Specifically, PerceptiveAgent achieves a slightly higher BERTScore than Speech-GPT3.5, primarily because our model can generate content that more accurately captures the speaker's intentions and contains more emotionally intense words. Additionally, PerceptiveAgent notably outperforms Speech-GPT3.5 in terms of accuracy, as the latter doesn't incorporate any emotion prompts during speech generation, thus maintaining a limited variety of prosody. Despite this, the accuracy of PerceptiveAgent still remains at a relatively moderate level. This is because the generated responses, while contextually appropriate, may not entirely align with the real responses in terms of semantics and emotions.",
|
| 953 |
+
"bbox": [
|
| 954 |
+
505,
|
| 955 |
+
128,
|
| 956 |
+
885,
|
| 957 |
+
563
|
| 958 |
+
],
|
| 959 |
+
"page_idx": 5
|
| 960 |
+
},
|
| 961 |
+
{
|
| 962 |
+
"type": "text",
|
| 963 |
+
"text": "4.3.2 Speech Captioner",
|
| 964 |
+
"text_level": 1,
|
| 965 |
+
"bbox": [
|
| 966 |
+
507,
|
| 967 |
+
577,
|
| 968 |
+
709,
|
| 969 |
+
593
|
| 970 |
+
],
|
| 971 |
+
"page_idx": 5
|
| 972 |
+
},
|
| 973 |
+
{
|
| 974 |
+
"type": "text",
|
| 975 |
+
"text": "Table 2 evaluates the speech captioner's generalization performance on both the validation and test sets. Overall, it is evident that that the model achieves the highest F1-score for gender, followed by pitch and emotion. This underscores the model's proficiency in accurately discerning these attributes from input speech. Besides, both gender and emotion exhibit closely aligned precision and recall metrics, affirming the model's predictive prowess for these attributes. Meanwhile, there exists a notable disparity between precision and recall when predicting energy, indicating variable performance and a tendency towards confident predictions. Conversely, the model's performance in predicting speed is unsatisfactory, which can be attributed to the imbalanced distribution of speed in the training dataset, with over $60\\%$ of samples labeled as \"neutral\".",
|
| 976 |
+
"bbox": [
|
| 977 |
+
505,
|
| 978 |
+
598,
|
| 979 |
+
885,
|
| 980 |
+
885
|
| 981 |
+
],
|
| 982 |
+
"page_idx": 5
|
| 983 |
+
},
|
| 984 |
+
{
|
| 985 |
+
"type": "text",
|
| 986 |
+
"text": "We also discuss how errors in speech processing are affected by demographics of the speakers. Ta",
|
| 987 |
+
"bbox": [
|
| 988 |
+
507,
|
| 989 |
+
889,
|
| 990 |
+
884,
|
| 991 |
+
921
|
| 992 |
+
],
|
| 993 |
+
"page_idx": 5
|
| 994 |
+
},
|
| 995 |
+
{
|
| 996 |
+
"type": "footer",
|
| 997 |
+
"text": "15014",
|
| 998 |
+
"bbox": [
|
| 999 |
+
475,
|
| 1000 |
+
927,
|
| 1001 |
+
524,
|
| 1002 |
+
939
|
| 1003 |
+
],
|
| 1004 |
+
"page_idx": 5
|
| 1005 |
+
},
|
| 1006 |
+
{
|
| 1007 |
+
"type": "page_number",
|
| 1008 |
+
"text": "6",
|
| 1009 |
+
"bbox": [
|
| 1010 |
+
492,
|
| 1011 |
+
942,
|
| 1012 |
+
504,
|
| 1013 |
+
953
|
| 1014 |
+
],
|
| 1015 |
+
"page_idx": 5
|
| 1016 |
+
},
|
| 1017 |
+
{
|
| 1018 |
+
"type": "table",
|
| 1019 |
+
"img_path": "images/b534d1273249f2a6387ee44b1015776cfbeef595a972c87ebb7956f223397b7f.jpg",
|
| 1020 |
+
"table_caption": [],
|
| 1021 |
+
"table_footnote": [],
|
| 1022 |
+
"table_body": "<table><tr><td rowspan=\"2\">Attribute</td><td colspan=\"3\">Validation</td><td colspan=\"3\">Test</td></tr><tr><td>Precision</td><td>Recall</td><td>F1-score</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>Gender</td><td>99.3</td><td>97.5</td><td>98.4</td><td>99.3</td><td>98.6</td><td>99.0</td></tr><tr><td>Emotion</td><td>85.8</td><td>85.4</td><td>85.1</td><td>87.3</td><td>87.1</td><td>86.8</td></tr><tr><td>Pitch</td><td>85.6</td><td>76.8</td><td>80.4</td><td>79.6</td><td>72.1</td><td>75.3</td></tr><tr><td>Energy</td><td>72.4</td><td>57.4</td><td>63.1</td><td>77.7</td><td>65.3</td><td>69.9</td></tr><tr><td>Speed</td><td>47.2</td><td>36.7</td><td>41.3</td><td>48.5</td><td>41.5</td><td>44.7</td></tr></table>",
|
| 1023 |
+
"bbox": [
|
| 1024 |
+
213,
|
| 1025 |
+
80,
|
| 1026 |
+
783,
|
| 1027 |
+
205
|
| 1028 |
+
],
|
| 1029 |
+
"page_idx": 6
|
| 1030 |
+
},
|
| 1031 |
+
{
|
| 1032 |
+
"type": "table",
|
| 1033 |
+
"img_path": "images/b3526c7384ede8696ba1cf986d92fa8e0dc2efafc954faee8b31a200b3f89bd3.jpg",
|
| 1034 |
+
"table_caption": [
|
| 1035 |
+
"Table 2: Performance evaluation of the speech captioner. Precision, recall and F1-score $(\\%)$ are utilized to measure its generalization ability on both the validation and test sets. Predicted labels are obtained through semantic classification on the generated captions, while the true labels are derived from the TextroSpeech dataset."
|
| 1036 |
+
],
|
| 1037 |
+
"table_footnote": [],
|
| 1038 |
+
"table_body": "<table><tr><td rowspan=\"2\">Attribute</td><td colspan=\"3\">Male</td><td colspan=\"3\">Female</td></tr><tr><td>Precision</td><td>Recall</td><td>F1-score</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>Emotion</td><td>84.3</td><td>85.4</td><td>84.2</td><td>87.4</td><td>85.5</td><td>86.0</td></tr><tr><td>Pitch</td><td>88.2</td><td>82.8</td><td>85.3</td><td>84.8</td><td>71.0</td><td>75.9</td></tr><tr><td>Energy</td><td>74.4</td><td>60.0</td><td>65.0</td><td>71.2</td><td>54.9</td><td>60.9</td></tr><tr><td>Speed</td><td>46.4</td><td>43.1</td><td>44.6</td><td>48.0</td><td>30.6</td><td>37.3</td></tr></table>",
|
| 1039 |
+
"bbox": [
|
| 1040 |
+
213,
|
| 1041 |
+
271,
|
| 1042 |
+
783,
|
| 1043 |
+
379
|
| 1044 |
+
],
|
| 1045 |
+
"page_idx": 6
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "table",
|
| 1049 |
+
"img_path": "images/a22dfb54ff6607e8cbab5d76c46ee0cd4ec0f3f117520b54492e9eea8c52fef9.jpg",
|
| 1050 |
+
"table_caption": [
|
| 1051 |
+
"Table 3: Comparison of the speech captioner's performance across genders."
|
| 1052 |
+
],
|
| 1053 |
+
"table_footnote": [],
|
| 1054 |
+
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"2\">Accuracy</td><td>FFE</td></tr><tr><td>Exp-R</td><td>Exp-I</td><td>Exp</td></tr><tr><td>GT</td><td>91.9</td><td>75.1</td><td>-</td></tr><tr><td>EXPRESSO</td><td>87.9</td><td>67.0</td><td>0.17±0.12</td></tr><tr><td>MSMA</td><td>83.8</td><td>70.8</td><td>0.39±0.16</td></tr></table>",
|
| 1055 |
+
"bbox": [
|
| 1056 |
+
131,
|
| 1057 |
+
426,
|
| 1058 |
+
470,
|
| 1059 |
+
518
|
| 1060 |
+
],
|
| 1061 |
+
"page_idx": 6
|
| 1062 |
+
},
|
| 1063 |
+
{
|
| 1064 |
+
"type": "text",
|
| 1065 |
+
"text": "Table 4: Preservation evaluation of MSMA-Synthesizer. Accuracy (\\%) is evaluated on EXPRESSO read (Exp-R) and conversation (Exp-I) dataset. F0 Frame Error (FFE) is calculated on EXPRESSO (Exp). GT represents the results of automatic metrics calculated on real audio. EXPRESSO and MSMA refer to the synthesizers in EXPRESSO and PerceptiveAgent respectively.",
|
| 1066 |
+
"bbox": [
|
| 1067 |
+
112,
|
| 1068 |
+
527,
|
| 1069 |
+
489,
|
| 1070 |
+
629
|
| 1071 |
+
],
|
| 1072 |
+
"page_idx": 6
|
| 1073 |
+
},
|
| 1074 |
+
{
|
| 1075 |
+
"type": "text",
|
| 1076 |
+
"text": "ble 3 compares the performance of the speech captioner across genders, which represents the most prevalent factor. The F1-score on male speech surpasses that on female speech in terms of pitch, energy and speed, despite the comparable sample sizes for male and female groups (8634 VS. 8983). This demonstrates a variation in the model's performance depending on the gender of the speakers.",
|
| 1077 |
+
"bbox": [
|
| 1078 |
+
112,
|
| 1079 |
+
657,
|
| 1080 |
+
489,
|
| 1081 |
+
785
|
| 1082 |
+
],
|
| 1083 |
+
"page_idx": 6
|
| 1084 |
+
},
|
| 1085 |
+
{
|
| 1086 |
+
"type": "text",
|
| 1087 |
+
"text": "4.3.3 MSMA-Synthesizer",
|
| 1088 |
+
"text_level": 1,
|
| 1089 |
+
"bbox": [
|
| 1090 |
+
112,
|
| 1091 |
+
801,
|
| 1092 |
+
329,
|
| 1093 |
+
816
|
| 1094 |
+
],
|
| 1095 |
+
"page_idx": 6
|
| 1096 |
+
},
|
| 1097 |
+
{
|
| 1098 |
+
"type": "text",
|
| 1099 |
+
"text": "Table 4 assesses the MSMA-Synthesizer's ability to preserve emotion and prosody features on the test set, where the EXPRESSO Synthesizer is reproduced by us. The \"GT\" method represents the results of automatic metrics calculated on real audio. Clearly, the MSMA-Synthesizer achieves",
|
| 1100 |
+
"bbox": [
|
| 1101 |
+
112,
|
| 1102 |
+
824,
|
| 1103 |
+
489,
|
| 1104 |
+
921
|
| 1105 |
+
],
|
| 1106 |
+
"page_idx": 6
|
| 1107 |
+
},
|
| 1108 |
+
{
|
| 1109 |
+
"type": "text",
|
| 1110 |
+
"text": "higher accuracy on the read dataset compared to EXPRESSO. This suggests that an integration of multiple attributes into speech synthesis can more effectively enable the model to synthesize emotionally expressive audio in dialogue scenarios, meeting the requirements of our system. However, there is a decrease in accuracy on the Exp-R dataset, which is relevant to the less apparent variation in prosody with emotional transitions. Additionally, in terms of FFE, it can be observed that incorporating multiple attributes into the MSMA-Synthesizer may lead to some degradation in speech synthesis quality. However this degradation remains within an acceptable range.",
|
| 1111 |
+
"bbox": [
|
| 1112 |
+
507,
|
| 1113 |
+
429,
|
| 1114 |
+
884,
|
| 1115 |
+
653
|
| 1116 |
+
],
|
| 1117 |
+
"page_idx": 6
|
| 1118 |
+
},
|
| 1119 |
+
{
|
| 1120 |
+
"type": "text",
|
| 1121 |
+
"text": "4.3.4 Ablation Study",
|
| 1122 |
+
"text_level": 1,
|
| 1123 |
+
"bbox": [
|
| 1124 |
+
507,
|
| 1125 |
+
671,
|
| 1126 |
+
690,
|
| 1127 |
+
686
|
| 1128 |
+
],
|
| 1129 |
+
"page_idx": 6
|
| 1130 |
+
},
|
| 1131 |
+
{
|
| 1132 |
+
"type": "text",
|
| 1133 |
+
"text": "Effectiveness of Captions. The last line in Table 1 demonstrates the effectiveness of captions in PerceptiveAgent. The system without captions synthesizes speech using randomly selected labels for all four speaking attributes (pitch, speed, energy, and emotion), while maintaining the same response contents as the PerceptiveAgent. It is evident that the PerceptiveAgent outperforms the system without captions, highlighting the effectiveness of captions in generating speech with affective empathy.",
|
| 1134 |
+
"bbox": [
|
| 1135 |
+
507,
|
| 1136 |
+
694,
|
| 1137 |
+
882,
|
| 1138 |
+
869
|
| 1139 |
+
],
|
| 1140 |
+
"page_idx": 6
|
| 1141 |
+
},
|
| 1142 |
+
{
|
| 1143 |
+
"type": "text",
|
| 1144 |
+
"text": "Effectiveness of Style Factors. To discern the discrete impact of distinct speaking style factors, we conduct an ablation experiment by systematically",
|
| 1145 |
+
"bbox": [
|
| 1146 |
+
507,
|
| 1147 |
+
873,
|
| 1148 |
+
882,
|
| 1149 |
+
921
|
| 1150 |
+
],
|
| 1151 |
+
"page_idx": 6
|
| 1152 |
+
},
|
| 1153 |
+
{
|
| 1154 |
+
"type": "footer",
|
| 1155 |
+
"text": "15015",
|
| 1156 |
+
"bbox": [
|
| 1157 |
+
475,
|
| 1158 |
+
927,
|
| 1159 |
+
524,
|
| 1160 |
+
939
|
| 1161 |
+
],
|
| 1162 |
+
"page_idx": 6
|
| 1163 |
+
},
|
| 1164 |
+
{
|
| 1165 |
+
"type": "page_number",
|
| 1166 |
+
"text": "7",
|
| 1167 |
+
"bbox": [
|
| 1168 |
+
492,
|
| 1169 |
+
941,
|
| 1170 |
+
504,
|
| 1171 |
+
953
|
| 1172 |
+
],
|
| 1173 |
+
"page_idx": 6
|
| 1174 |
+
},
|
| 1175 |
+
{
|
| 1176 |
+
"type": "table",
|
| 1177 |
+
"img_path": "images/1fd8fe786a02d35d30627c5919fc17f3b7ecfee028e10148aad0c6e2f3a70ddb.jpg",
|
| 1178 |
+
"table_caption": [],
|
| 1179 |
+
"table_footnote": [],
|
| 1180 |
+
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"2\">Accuracy</td><td>FFE</td></tr><tr><td>Exp-R</td><td>Exp-I</td><td>Exp</td></tr><tr><td>GT</td><td>91.9</td><td>75.1</td><td>-</td></tr><tr><td>EXPRESSO</td><td>87.9</td><td>67.0</td><td>0.17±0.12</td></tr><tr><td>MSMA</td><td>83.8</td><td>70.8</td><td>0.39±0.16</td></tr><tr><td>-style</td><td>82.2</td><td>69.0</td><td>0.40±0.16</td></tr><tr><td>-speed</td><td>31.8</td><td>9.2</td><td>0.44±0.13</td></tr><tr><td>-energy</td><td>31.0</td><td>9.1</td><td>0.44±0.13</td></tr><tr><td>-gender</td><td>30.8</td><td>8.7</td><td>0.44±0.13</td></tr><tr><td>-pitch</td><td>30.7</td><td>7.4</td><td>0.43±0.13</td></tr></table>",
|
| 1181 |
+
"bbox": [
|
| 1182 |
+
132,
|
| 1183 |
+
82,
|
| 1184 |
+
470,
|
| 1185 |
+
252
|
| 1186 |
+
],
|
| 1187 |
+
"page_idx": 7
|
| 1188 |
+
},
|
| 1189 |
+
{
|
| 1190 |
+
"type": "text",
|
| 1191 |
+
"text": "Table 5: Performance of the MSMA-Synthesizer conditioned on single speaking style factors.",
|
| 1192 |
+
"bbox": [
|
| 1193 |
+
112,
|
| 1194 |
+
263,
|
| 1195 |
+
489,
|
| 1196 |
+
293
|
| 1197 |
+
],
|
| 1198 |
+
"page_idx": 7
|
| 1199 |
+
},
|
| 1200 |
+
{
|
| 1201 |
+
"type": "text",
|
| 1202 |
+
"text": "varying each factor while maintaining the others at their default values. Table 5 presents that the model with style remained achieves the highest accuracy and the lowest FFE, while the models with the other factors exhibit similar performance. This underscores the predominant contribution of style to the effectiveness of expressive speech synthesis.",
|
| 1203 |
+
"bbox": [
|
| 1204 |
+
112,
|
| 1205 |
+
318,
|
| 1206 |
+
489,
|
| 1207 |
+
431
|
| 1208 |
+
],
|
| 1209 |
+
"page_idx": 7
|
| 1210 |
+
},
|
| 1211 |
+
{
|
| 1212 |
+
"type": "text",
|
| 1213 |
+
"text": "5 Case Study",
|
| 1214 |
+
"text_level": 1,
|
| 1215 |
+
"bbox": [
|
| 1216 |
+
112,
|
| 1217 |
+
444,
|
| 1218 |
+
247,
|
| 1219 |
+
460
|
| 1220 |
+
],
|
| 1221 |
+
"page_idx": 7
|
| 1222 |
+
},
|
| 1223 |
+
{
|
| 1224 |
+
"type": "text",
|
| 1225 |
+
"text": "Figure 3 presents two cases comparing the response quality between Speech-GPT3.5 and PerceptiveAgent. It demonstrates that by explicitly incorporating acoustic information through captions, the LLM can more accurately comprehend the speaker's intentions and generate more accurate and contextually appropriate responses. The first and second examples illustrate scenarios where the speaker's intention either contradicts or aligns with the linguistic contents, respectively.",
|
| 1226 |
+
"bbox": [
|
| 1227 |
+
112,
|
| 1228 |
+
470,
|
| 1229 |
+
489,
|
| 1230 |
+
631
|
| 1231 |
+
],
|
| 1232 |
+
"page_idx": 7
|
| 1233 |
+
},
|
| 1234 |
+
{
|
| 1235 |
+
"type": "text",
|
| 1236 |
+
"text": "The first example in Figure 3 (a) depicts an unplanned meeting conversation between two friends. Analyzing solely from the textual contents, it is suggested that the speaker B is extremely excited and delighted about this conversation. However, a closer examination of the key words of \"lower vocal\" and \"subbed energy\" in speaker B's caption reveals an evasive attitude towards the situation. Consequently, when confronted with speaker A's question, \"Were you here waiting for me?\", it can be inferred that speaker B is not inclined to engage in extensive conversation. The absence of nuanced captions poses a challenge for SpeechGPT3.5, leading to a misinterpretation and generating a response that implies a strong desire to continue the conversation. In contrast, PerceptiveAgent provides a response in accordance with the underlying meaning. Therefore, despite po",
|
| 1237 |
+
"bbox": [
|
| 1238 |
+
112,
|
| 1239 |
+
631,
|
| 1240 |
+
490,
|
| 1241 |
+
921
|
| 1242 |
+
],
|
| 1243 |
+
"page_idx": 7
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "text",
|
| 1247 |
+
"text": "tential inconsistencies between linguistic contents and speaker intentions disrupting the accuracy of dialogue context understanding, PerceptiveAgent, with the aid of captions, can effectively capture the speaker's intent by correctly discerning the acoustic information of speech.",
|
| 1248 |
+
"bbox": [
|
| 1249 |
+
507,
|
| 1250 |
+
84,
|
| 1251 |
+
884,
|
| 1252 |
+
180
|
| 1253 |
+
],
|
| 1254 |
+
"page_idx": 7
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "text",
|
| 1258 |
+
"text": "In the second example in Figure 3 (b), the speaker A receives a paper from his mother and intends to share it with his friends. It can be inferred that he is highly excited at the moment, as evidenced by the key words \"treble tone\" and \"energetically\" in the caption. Recognizing speaker A's excited mood, the response generated by Perceptive Agent mirrors the same enthusiasm and curiosity, aligning well with the ground truth. However, Speech-GPT3.5 fails to perceive speaker A's excitement and merely raises the question in a bland manner. Thus, in scenarios where the textual contents coincide with the speaker's intent, our model can also provide responses that correspond to the context of the conversation.",
|
| 1259 |
+
"bbox": [
|
| 1260 |
+
507,
|
| 1261 |
+
184,
|
| 1262 |
+
885,
|
| 1263 |
+
426
|
| 1264 |
+
],
|
| 1265 |
+
"page_idx": 7
|
| 1266 |
+
},
|
| 1267 |
+
{
|
| 1268 |
+
"type": "text",
|
| 1269 |
+
"text": "6 Conclusion",
|
| 1270 |
+
"text_level": 1,
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
507,
|
| 1273 |
+
451,
|
| 1274 |
+
640,
|
| 1275 |
+
467
|
| 1276 |
+
],
|
| 1277 |
+
"page_idx": 7
|
| 1278 |
+
},
|
| 1279 |
+
{
|
| 1280 |
+
"type": "text",
|
| 1281 |
+
"text": "In this paper, we propose PerceptiveAgent, an empathetic multi-modal dialogue system capable of accurately discerning the speaker's intentions through the integration of perceptive speech captions and to respond with nuanced and expressive spoken dialogues. Specifically, PerceptiveAgent comprises three cascaded modules: a speech captioner for intention discernment, an LLM for comprehension through sensory integration, and an MSMA-Synthesizer for expressive speech synthesis. Initially, the system employs a perceptive captioner model to capture acoustic features from each speech within dialogues. Subsequently, an LLM module serves as the cognitive core, generating relevant response content with a caption conditioned on the comprehension of the speaker's intentions. An MSMA-Synthesizer is then developed to synthesize expressive speech. Experimental results indicate PerceptiveAgent's strong ability in empathetic response generation, closely aligning with the dialogue context in terms of linguistic contents and exhibiting high expressiveness in acoustic information. Additionally, a case study demonstrates PerceptiveAgent's capability to accurately identify the speaker's intentions in scenarios where the literal interpretations of words are either contrary to or inconsistent with the speaker's true feelings.",
|
| 1282 |
+
"bbox": [
|
| 1283 |
+
507,
|
| 1284 |
+
487,
|
| 1285 |
+
884,
|
| 1286 |
+
921
|
| 1287 |
+
],
|
| 1288 |
+
"page_idx": 7
|
| 1289 |
+
},
|
| 1290 |
+
{
|
| 1291 |
+
"type": "footer",
|
| 1292 |
+
"text": "15016",
|
| 1293 |
+
"bbox": [
|
| 1294 |
+
477,
|
| 1295 |
+
927,
|
| 1296 |
+
524,
|
| 1297 |
+
939
|
| 1298 |
+
],
|
| 1299 |
+
"page_idx": 7
|
| 1300 |
+
},
|
| 1301 |
+
{
|
| 1302 |
+
"type": "page_number",
|
| 1303 |
+
"text": "8",
|
| 1304 |
+
"bbox": [
|
| 1305 |
+
492,
|
| 1306 |
+
941,
|
| 1307 |
+
504,
|
| 1308 |
+
953
|
| 1309 |
+
],
|
| 1310 |
+
"page_idx": 7
|
| 1311 |
+
},
|
| 1312 |
+
{
|
| 1313 |
+
"type": "text",
|
| 1314 |
+
"text": "Input",
|
| 1315 |
+
"text_level": 1,
|
| 1316 |
+
"bbox": [
|
| 1317 |
+
473,
|
| 1318 |
+
93,
|
| 1319 |
+
522,
|
| 1320 |
+
108
|
| 1321 |
+
],
|
| 1322 |
+
"page_idx": 8
|
| 1323 |
+
},
|
| 1324 |
+
{
|
| 1325 |
+
"type": "text",
|
| 1326 |
+
"text": "Speaker A: Are you being British?!",
|
| 1327 |
+
"bbox": [
|
| 1328 |
+
260,
|
| 1329 |
+
107,
|
| 1330 |
+
455,
|
| 1331 |
+
118
|
| 1332 |
+
],
|
| 1333 |
+
"page_idx": 8
|
| 1334 |
+
},
|
| 1335 |
+
{
|
| 1336 |
+
"type": "text",
|
| 1337 |
+
"text": "Caption: In a hushed voice, she speaks rapidly. Speaking softly, she maintains a high-pitched voice.",
|
| 1338 |
+
"bbox": [
|
| 1339 |
+
260,
|
| 1340 |
+
118,
|
| 1341 |
+
737,
|
| 1342 |
+
140
|
| 1343 |
+
],
|
| 1344 |
+
"page_idx": 8
|
| 1345 |
+
},
|
| 1346 |
+
{
|
| 1347 |
+
"type": "text",
|
| 1348 |
+
"text": "Speaker B: No. Not anymore.",
|
| 1349 |
+
"bbox": [
|
| 1350 |
+
258,
|
| 1351 |
+
141,
|
| 1352 |
+
426,
|
| 1353 |
+
151
|
| 1354 |
+
],
|
| 1355 |
+
"page_idx": 8
|
| 1356 |
+
},
|
| 1357 |
+
{
|
| 1358 |
+
"type": "text",
|
| 1359 |
+
"text": "Caption: Employing a lower vocal quality and customary speaking speed, his expression radiates subdued energy.",
|
| 1360 |
+
"bbox": [
|
| 1361 |
+
258,
|
| 1362 |
+
152,
|
| 1363 |
+
737,
|
| 1364 |
+
174
|
| 1365 |
+
],
|
| 1366 |
+
"page_idx": 8
|
| 1367 |
+
},
|
| 1368 |
+
{
|
| 1369 |
+
"type": "text",
|
| 1370 |
+
"text": "Speaker A: Why are you shopping here? You don't live in this neighborhood. Were you here waiting for me?",
|
| 1371 |
+
"bbox": [
|
| 1372 |
+
258,
|
| 1373 |
+
174,
|
| 1374 |
+
737,
|
| 1375 |
+
197
|
| 1376 |
+
],
|
| 1377 |
+
"page_idx": 8
|
| 1378 |
+
},
|
| 1379 |
+
{
|
| 1380 |
+
"type": "text",
|
| 1381 |
+
"text": "Caption: Her subdued energy doesn't affect the high pitch of her voice at a regular speed.",
|
| 1382 |
+
"bbox": [
|
| 1383 |
+
258,
|
| 1384 |
+
197,
|
| 1385 |
+
739,
|
| 1386 |
+
209
|
| 1387 |
+
],
|
| 1388 |
+
"page_idx": 8
|
| 1389 |
+
},
|
| 1390 |
+
{
|
| 1391 |
+
"type": "text",
|
| 1392 |
+
"text": "Response",
|
| 1393 |
+
"text_level": 1,
|
| 1394 |
+
"bbox": [
|
| 1395 |
+
460,
|
| 1396 |
+
212,
|
| 1397 |
+
537,
|
| 1398 |
+
227
|
| 1399 |
+
],
|
| 1400 |
+
"page_idx": 8
|
| 1401 |
+
},
|
| 1402 |
+
{
|
| 1403 |
+
"type": "text",
|
| 1404 |
+
"text": "Ground Truth",
|
| 1405 |
+
"bbox": [
|
| 1406 |
+
260,
|
| 1407 |
+
225,
|
| 1408 |
+
347,
|
| 1409 |
+
235
|
| 1410 |
+
],
|
| 1411 |
+
"page_idx": 8
|
| 1412 |
+
},
|
| 1413 |
+
{
|
| 1414 |
+
"type": "text",
|
| 1415 |
+
"text": "Speaker B: Yeah, huh. I'm just uh, you know I'm just picking up some things for a party. \nCaption: The boy's low energy is evident in his fast, low-pitched speech.",
|
| 1416 |
+
"bbox": [
|
| 1417 |
+
260,
|
| 1418 |
+
237,
|
| 1419 |
+
737,
|
| 1420 |
+
259
|
| 1421 |
+
],
|
| 1422 |
+
"page_idx": 8
|
| 1423 |
+
},
|
| 1424 |
+
{
|
| 1425 |
+
"type": "text",
|
| 1426 |
+
"text": "Speech-GPT3.5",
|
| 1427 |
+
"bbox": [
|
| 1428 |
+
260,
|
| 1429 |
+
260,
|
| 1430 |
+
356,
|
| 1431 |
+
269
|
| 1432 |
+
],
|
| 1433 |
+
"page_idx": 8
|
| 1434 |
+
},
|
| 1435 |
+
{
|
| 1436 |
+
"type": "text",
|
| 1437 |
+
"text": "Speaker B: Yes, I was actually. I needed to talk to you about something important.",
|
| 1438 |
+
"bbox": [
|
| 1439 |
+
260,
|
| 1440 |
+
271,
|
| 1441 |
+
712,
|
| 1442 |
+
282
|
| 1443 |
+
],
|
| 1444 |
+
"page_idx": 8
|
| 1445 |
+
},
|
| 1446 |
+
{
|
| 1447 |
+
"type": "text",
|
| 1448 |
+
"text": "PerceptiveAgent",
|
| 1449 |
+
"bbox": [
|
| 1450 |
+
260,
|
| 1451 |
+
282,
|
| 1452 |
+
359,
|
| 1453 |
+
293
|
| 1454 |
+
],
|
| 1455 |
+
"page_idx": 8
|
| 1456 |
+
},
|
| 1457 |
+
{
|
| 1458 |
+
"type": "text",
|
| 1459 |
+
"text": "Speaker B: I needed to pick up a few things, and I happened to be in the area. That's all.",
|
| 1460 |
+
"bbox": [
|
| 1461 |
+
260,
|
| 1462 |
+
293,
|
| 1463 |
+
737,
|
| 1464 |
+
304
|
| 1465 |
+
],
|
| 1466 |
+
"page_idx": 8
|
| 1467 |
+
},
|
| 1468 |
+
{
|
| 1469 |
+
"type": "text",
|
| 1470 |
+
"text": "Caption: Speaking with a low energy level, his slow speaking pace conveys a sense of calmness and control.",
|
| 1471 |
+
"bbox": [
|
| 1472 |
+
260,
|
| 1473 |
+
305,
|
| 1474 |
+
737,
|
| 1475 |
+
326
|
| 1476 |
+
],
|
| 1477 |
+
"page_idx": 8
|
| 1478 |
+
},
|
| 1479 |
+
{
|
| 1480 |
+
"type": "text",
|
| 1481 |
+
"text": "(a) Contradictory Example",
|
| 1482 |
+
"text_level": 1,
|
| 1483 |
+
"bbox": [
|
| 1484 |
+
413,
|
| 1485 |
+
350,
|
| 1486 |
+
584,
|
| 1487 |
+
363
|
| 1488 |
+
],
|
| 1489 |
+
"page_idx": 8
|
| 1490 |
+
},
|
| 1491 |
+
{
|
| 1492 |
+
"type": "text",
|
| 1493 |
+
"text": "Input",
|
| 1494 |
+
"text_level": 1,
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
473,
|
| 1497 |
+
388,
|
| 1498 |
+
521,
|
| 1499 |
+
401
|
| 1500 |
+
],
|
| 1501 |
+
"page_idx": 8
|
| 1502 |
+
},
|
| 1503 |
+
{
|
| 1504 |
+
"type": "text",
|
| 1505 |
+
"text": "Speaker A: Hey guys check it out! My mom sent me the paper!",
|
| 1506 |
+
"bbox": [
|
| 1507 |
+
258,
|
| 1508 |
+
401,
|
| 1509 |
+
608,
|
| 1510 |
+
413
|
| 1511 |
+
],
|
| 1512 |
+
"page_idx": 8
|
| 1513 |
+
},
|
| 1514 |
+
{
|
| 1515 |
+
"type": "text",
|
| 1516 |
+
"text": "Caption: With a treble tone, she converses naturally and energetically. Her speaking rate is normal, despite her high-pitched voice.",
|
| 1517 |
+
"bbox": [
|
| 1518 |
+
258,
|
| 1519 |
+
413,
|
| 1520 |
+
737,
|
| 1521 |
+
435
|
| 1522 |
+
],
|
| 1523 |
+
"page_idx": 8
|
| 1524 |
+
},
|
| 1525 |
+
{
|
| 1526 |
+
"type": "text",
|
| 1527 |
+
"text": "Response",
|
| 1528 |
+
"text_level": 1,
|
| 1529 |
+
"bbox": [
|
| 1530 |
+
458,
|
| 1531 |
+
439,
|
| 1532 |
+
536,
|
| 1533 |
+
454
|
| 1534 |
+
],
|
| 1535 |
+
"page_idx": 8
|
| 1536 |
+
},
|
| 1537 |
+
{
|
| 1538 |
+
"type": "text",
|
| 1539 |
+
"text": "Ground Truth",
|
| 1540 |
+
"bbox": [
|
| 1541 |
+
258,
|
| 1542 |
+
451,
|
| 1543 |
+
346,
|
| 1544 |
+
462
|
| 1545 |
+
],
|
| 1546 |
+
"page_idx": 8
|
| 1547 |
+
},
|
| 1548 |
+
{
|
| 1549 |
+
"type": "text",
|
| 1550 |
+
"text": "Speaker B: Ooh, let's see it!",
|
| 1551 |
+
"bbox": [
|
| 1552 |
+
258,
|
| 1553 |
+
464,
|
| 1554 |
+
416,
|
| 1555 |
+
474
|
| 1556 |
+
],
|
| 1557 |
+
"page_idx": 8
|
| 1558 |
+
},
|
| 1559 |
+
{
|
| 1560 |
+
"type": "text",
|
| 1561 |
+
"text": "Caption: Speaking with a treble, she maintains a regular conversational pace. Speaking with normal energy, she maintains a treble tone.",
|
| 1562 |
+
"bbox": [
|
| 1563 |
+
258,
|
| 1564 |
+
475,
|
| 1565 |
+
737,
|
| 1566 |
+
497
|
| 1567 |
+
],
|
| 1568 |
+
"page_idx": 8
|
| 1569 |
+
},
|
| 1570 |
+
{
|
| 1571 |
+
"type": "text",
|
| 1572 |
+
"text": "Speech-GPT3.5",
|
| 1573 |
+
"bbox": [
|
| 1574 |
+
258,
|
| 1575 |
+
498,
|
| 1576 |
+
354,
|
| 1577 |
+
508
|
| 1578 |
+
],
|
| 1579 |
+
"page_idx": 8
|
| 1580 |
+
},
|
| 1581 |
+
{
|
| 1582 |
+
"type": "text",
|
| 1583 |
+
"text": "Speaker B: What does it say?",
|
| 1584 |
+
"bbox": [
|
| 1585 |
+
258,
|
| 1586 |
+
508,
|
| 1587 |
+
423,
|
| 1588 |
+
520
|
| 1589 |
+
],
|
| 1590 |
+
"page_idx": 8
|
| 1591 |
+
},
|
| 1592 |
+
{
|
| 1593 |
+
"type": "text",
|
| 1594 |
+
"text": "PerceptiveAgent",
|
| 1595 |
+
"bbox": [
|
| 1596 |
+
258,
|
| 1597 |
+
521,
|
| 1598 |
+
357,
|
| 1599 |
+
531
|
| 1600 |
+
],
|
| 1601 |
+
"page_idx": 8
|
| 1602 |
+
},
|
| 1603 |
+
{
|
| 1604 |
+
"type": "text",
|
| 1605 |
+
"text": "Speaker B: That's great! Let's take a look at it together.",
|
| 1606 |
+
"bbox": [
|
| 1607 |
+
258,
|
| 1608 |
+
532,
|
| 1609 |
+
561,
|
| 1610 |
+
543
|
| 1611 |
+
],
|
| 1612 |
+
"page_idx": 8
|
| 1613 |
+
},
|
| 1614 |
+
{
|
| 1615 |
+
"type": "text",
|
| 1616 |
+
"text": "Caption: A speaker with a low tone and normal speed, who is energetically speaking to others and taking interest in the conversation.",
|
| 1617 |
+
"bbox": [
|
| 1618 |
+
258,
|
| 1619 |
+
543,
|
| 1620 |
+
737,
|
| 1621 |
+
565
|
| 1622 |
+
],
|
| 1623 |
+
"page_idx": 8
|
| 1624 |
+
},
|
| 1625 |
+
{
|
| 1626 |
+
"type": "text",
|
| 1627 |
+
"text": "(b) Consistent Example",
|
| 1628 |
+
"text_level": 1,
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
423,
|
| 1631 |
+
596,
|
| 1632 |
+
573,
|
| 1633 |
+
611
|
| 1634 |
+
],
|
| 1635 |
+
"page_idx": 8
|
| 1636 |
+
},
|
| 1637 |
+
{
|
| 1638 |
+
"type": "text",
|
| 1639 |
+
"text": "Figure 3: Cases comparing the response quality between Speech-GPT3.5 and PerceptiveAgent.",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
174,
|
| 1642 |
+
626,
|
| 1643 |
+
818,
|
| 1644 |
+
642
|
| 1645 |
+
],
|
| 1646 |
+
"page_idx": 8
|
| 1647 |
+
},
|
| 1648 |
+
{
|
| 1649 |
+
"type": "text",
|
| 1650 |
+
"text": "7 Limitations",
|
| 1651 |
+
"text_level": 1,
|
| 1652 |
+
"bbox": [
|
| 1653 |
+
112,
|
| 1654 |
+
665,
|
| 1655 |
+
250,
|
| 1656 |
+
680
|
| 1657 |
+
],
|
| 1658 |
+
"page_idx": 8
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "text",
|
| 1662 |
+
"text": "Although PerceptiveAgent excels at providing empathetic responses in terms of both linguistic and acoustic contents, several limitations can be observed in this system: 1) Dataset Limitation: PerceptiveAgent's perception ability is currently constrained by the comprehensiveness of the training dataset in describing speech information. Presently, it is unable to discern speaker identity and background noise from speech; 2) Time Delay Limitation: PerceptiveAgent is a system cascaded by three interconnected components, which introduces accumulated delays to the response time, and 3) Length Limitation: The maximum token length in LLMs may limit the multi-turn dialogue.",
|
| 1663 |
+
"bbox": [
|
| 1664 |
+
112,
|
| 1665 |
+
696,
|
| 1666 |
+
490,
|
| 1667 |
+
921
|
| 1668 |
+
],
|
| 1669 |
+
"page_idx": 8
|
| 1670 |
+
},
|
| 1671 |
+
{
|
| 1672 |
+
"type": "text",
|
| 1673 |
+
"text": "Acknowledgements",
|
| 1674 |
+
"text_level": 1,
|
| 1675 |
+
"bbox": [
|
| 1676 |
+
509,
|
| 1677 |
+
665,
|
| 1678 |
+
682,
|
| 1679 |
+
682
|
| 1680 |
+
],
|
| 1681 |
+
"page_idx": 8
|
| 1682 |
+
},
|
| 1683 |
+
{
|
| 1684 |
+
"type": "text",
|
| 1685 |
+
"text": "This research was supported by the National Natural Science Foundation of China (Grant No. 62276245).",
|
| 1686 |
+
"bbox": [
|
| 1687 |
+
507,
|
| 1688 |
+
692,
|
| 1689 |
+
885,
|
| 1690 |
+
739
|
| 1691 |
+
],
|
| 1692 |
+
"page_idx": 8
|
| 1693 |
+
},
|
| 1694 |
+
{
|
| 1695 |
+
"type": "text",
|
| 1696 |
+
"text": "References",
|
| 1697 |
+
"text_level": 1,
|
| 1698 |
+
"bbox": [
|
| 1699 |
+
509,
|
| 1700 |
+
766,
|
| 1701 |
+
608,
|
| 1702 |
+
782
|
| 1703 |
+
],
|
| 1704 |
+
"page_idx": 8
|
| 1705 |
+
},
|
| 1706 |
+
{
|
| 1707 |
+
"type": "text",
|
| 1708 |
+
"text": "Feilong Chen, Minglun Han, Haozhi Zhao, Qingyang Zhang, Jing Shi, Shuang Xu, and Bo Xu. 2023. X-LLM: bootstrapping advanced large language models by treating multi-modalities as foreign languages. CoRR, abs/2305.04160.",
|
| 1709 |
+
"bbox": [
|
| 1710 |
+
509,
|
| 1711 |
+
790,
|
| 1712 |
+
885,
|
| 1713 |
+
856
|
| 1714 |
+
],
|
| 1715 |
+
"page_idx": 8
|
| 1716 |
+
},
|
| 1717 |
+
{
|
| 1718 |
+
"type": "text",
|
| 1719 |
+
"text": "Jian Cong, Shan Yang, Na Hu, Guangzhi Li, Lei Xie, and Dan Su. 2021. Controllable context-aware conversational speech synthesis. In Interspeech, pages 4658-4662. ISCA.",
|
| 1720 |
+
"bbox": [
|
| 1721 |
+
509,
|
| 1722 |
+
866,
|
| 1723 |
+
885,
|
| 1724 |
+
919
|
| 1725 |
+
],
|
| 1726 |
+
"page_idx": 8
|
| 1727 |
+
},
|
| 1728 |
+
{
|
| 1729 |
+
"type": "footer",
|
| 1730 |
+
"text": "15017",
|
| 1731 |
+
"bbox": [
|
| 1732 |
+
475,
|
| 1733 |
+
927,
|
| 1734 |
+
524,
|
| 1735 |
+
940
|
| 1736 |
+
],
|
| 1737 |
+
"page_idx": 8
|
| 1738 |
+
},
|
| 1739 |
+
{
|
| 1740 |
+
"type": "page_number",
|
| 1741 |
+
"text": "9",
|
| 1742 |
+
"bbox": [
|
| 1743 |
+
492,
|
| 1744 |
+
941,
|
| 1745 |
+
504,
|
| 1746 |
+
953
|
| 1747 |
+
],
|
| 1748 |
+
"page_idx": 8
|
| 1749 |
+
},
|
| 1750 |
+
{
|
| 1751 |
+
"type": "list",
|
| 1752 |
+
"sub_type": "ref_text",
|
| 1753 |
+
"list_items": [
|
| 1754 |
+
"Benjamin MP Cuff, Sarah J Brown, Laura Taylor, and Douglas J Howat. 2016. Empathy: A review of the concept. Emotion review, 8(2):144-153.",
|
| 1755 |
+
"Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. 2023. Imagebind one embedding space to bind them all. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15180-15190. IEEE.",
|
| 1756 |
+
"Haohan Guo, Shaofei Zhang, Frank K. Soong, Lei He, and Lei Xie. 2021. Conversational end-to-end TTS for voice agents. In IEEE Spoken Language Technology Workshop, pages 403-409. IEEE.",
|
| 1757 |
+
"Zhifang Guo, Yichong Leng, Yihan Wu, Sheng Zhao, and Xu Tan. 2023. Prompts: Controllable text-to-speech with text descriptions. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.",
|
| 1758 |
+
"Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451-3460.",
|
| 1759 |
+
"Rongjie Huang, Mingze Li, Dongchao Yang, Jia-tong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, Yi Ren, Yuexian Zou, Zhou Zhao, and Shinji Watanabe. 2024. Audiogpt: Understanding and generating speech, music, sound, and talking head. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 23802-23804. AAAI Press.",
|
| 1760 |
+
"Keith Ito and Linda Johnson. 2017. The LJ speech dataset. https://keithito.com/LJ-Speech-Dataset/.",
|
| 1761 |
+
"Shengpeng Ji, Jialong Zuo, Minghui Fang, Ziyue Jiang, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. 2023. Textrolspeech: A text style control speech corpus with codec language text-to-speech models. CoRR, abs/2308.14430.",
|
| 1762 |
+
"Eugene Kharitonov, Jade Copet, Kushal Lakhotia, Tu Anh Nguyen, Paden Tomasello, Ann Lee, Ali Elkahky, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux, and Yossi Adi. 2022. textless: a library for textless spoken language processing. CoRR, abs/2202.07359.",
|
| 1763 |
+
"Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2021. Perspective-taking and pragmatics for generating empathetic responses focused on emotion causes. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2227-2240. Association for Computational Linguistics.",
|
| 1764 |
+
"Minkyu Kim, Kim Sung-Bin, and Tae-Hyun Oh. 2023. Prefix tuning for automated audio captioning."
|
| 1765 |
+
],
|
| 1766 |
+
"bbox": [
|
| 1767 |
+
115,
|
| 1768 |
+
85,
|
| 1769 |
+
485,
|
| 1770 |
+
920
|
| 1771 |
+
],
|
| 1772 |
+
"page_idx": 9
|
| 1773 |
+
},
|
| 1774 |
+
{
|
| 1775 |
+
"type": "list",
|
| 1776 |
+
"sub_type": "ref_text",
|
| 1777 |
+
"list_items": [
|
| 1778 |
+
"In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.",
|
| 1779 |
+
"Matthew Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, and Wei-Ning Hsu. 2023. Voicebox: Text-guided multilingual universal speech generation at scale. In Advances in Neural Information Processing Systems.",
|
| 1780 |
+
"Yichong Leng, Zhifang Guo, Kai Shen, Xu Tan, Zeqian Ju, Yanqing Liu, Yufei Liu, Dongchao Yang, Leying Zhang, Kaitao Song, Lei He, Xiang-Yang Li, Sheng Zhao, Tao Qin, and Jiang Bian. 2023. Promptts 2: Describing and generating voices with text prompt. CoRR, abs/2309.02285.",
|
| 1781 |
+
"Junnan Li, Dongxu Li, Silvio Savarese, and Steven C. H. Hoi. 2023. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 19730-19742. PMLR.",
|
| 1782 |
+
"Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. 2022. BLIP: bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 12888-12900. PMLR.",
|
| 1783 |
+
"Chaohu Liu, Kun Yin, Haoyu Cao, Xinghua Jiang, Xin Li, Yinsong Liu, Deqiang Jiang, Xing Sun, and Linli Xu. 2024. HRVDA: high-resolution visual document assistant. CoRR, abs/2404.06918.",
|
| 1784 |
+
"Kentaro Mitsui, Yukiya Hono, and Kei Sawada. 2023. Towards human-like spoken dialogue generation between AI agents from written dialogue. CoRR, abs/2310.01088.",
|
| 1785 |
+
"Michael Negnevitsky. 2005. Artificial intelligence: a guide to intelligent systems. Pearson education.",
|
| 1786 |
+
"Tu Anh Nguyen, Wei-Ning Hsu, Antony D'Avirro, Bowen Shi, Itai Gat, Maryam Fazel-Zarandi, Tal Remez, Jade Copet, Gabriel Synnaeve, Michael Hassid, Felix Kreuk, Yossi Adi, and Emmanuel Dupoux. 2023. EXPRESSO: A benchmark and analysis of discrete expressive speech resynthesis. CoRR, abs/2308.05725.",
|
| 1787 |
+
"Tu Anh Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Elkahky, Paden Tomasello, Robin Algayres, Benoit Sagot, Abdelrahman Mohamed, and Emmanuel Dupoux. 2022. Generative spoken dialogue language modeling. CoRR, abs/2203.16502.",
|
| 1788 |
+
"Yuto Nishimura, Yuki Saito, Shinnosuke Takamichi, Kentaro Tachibana, and Hiroshi Saruwatari. 2022. Acoustic modeling for end-to-end empathetic dialogue speech synthesis using linguistic and prosodic"
|
| 1789 |
+
],
|
| 1790 |
+
"bbox": [
|
| 1791 |
+
510,
|
| 1792 |
+
85,
|
| 1793 |
+
880,
|
| 1794 |
+
920
|
| 1795 |
+
],
|
| 1796 |
+
"page_idx": 9
|
| 1797 |
+
},
|
| 1798 |
+
{
|
| 1799 |
+
"type": "footer",
|
| 1800 |
+
"text": "15018",
|
| 1801 |
+
"bbox": [
|
| 1802 |
+
477,
|
| 1803 |
+
928,
|
| 1804 |
+
524,
|
| 1805 |
+
938
|
| 1806 |
+
],
|
| 1807 |
+
"page_idx": 9
|
| 1808 |
+
},
|
| 1809 |
+
{
|
| 1810 |
+
"type": "page_number",
|
| 1811 |
+
"text": "10",
|
| 1812 |
+
"bbox": [
|
| 1813 |
+
490,
|
| 1814 |
+
942,
|
| 1815 |
+
507,
|
| 1816 |
+
953
|
| 1817 |
+
],
|
| 1818 |
+
"page_idx": 9
|
| 1819 |
+
},
|
| 1820 |
+
{
|
| 1821 |
+
"type": "list",
|
| 1822 |
+
"sub_type": "ref_text",
|
| 1823 |
+
"list_items": [
|
| 1824 |
+
"contexts of dialogue history. In Interspeech, pages 3373-3377. ISCA.",
|
| 1825 |
+
"Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. CoRR, abs/2304.03277.",
|
| 1826 |
+
"Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled self-supervised representations. In *Interspeech*, pages 3615-3619. ISCA.",
|
| 1827 |
+
"Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 527-536. Association for Computational Linguistics.",
|
| 1828 |
+
"Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
|
| 1829 |
+
"Harry T Reis, Michael R Maniaci, Peter A Caprariello, Paul W Eastwick, and Eli J Finkel. 2011. Familiarity does indeed promote attraction in live interaction. Journal of personality and social psychology, 101(3):557.",
|
| 1830 |
+
"Stuart J Russell and Peter Norvig. 2010. Artificial intelligence a modern approach. London.",
|
| 1831 |
+
"Sahand Sabour, Chujie Zheng, and Minlie Huang. 2022. CEM: commonsense-aware empathetic response generation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 11229-11237. AAAI Press.",
|
| 1832 |
+
"Yuki Saito, Shinnosuke Takamichi, Eiji Ilimori, Kentaro Tachibana, and Hiroshi Saruwatari. 2023. Chatgptedss: Empathetic dialogue speech synthesis trained from chatgpt-derived context word embeddings. CoRR, abs/2305.13724.",
|
| 1833 |
+
"Timothy Schaumlöffel, Martina G Vilas, and Gemma Roig. 2023. Peacs: Prefix encoding for auditory caption synthesis. In Proceedings of the Detection and Classification of Acoustic. Scenes Events Challenge, pages 1-3.",
|
| 1834 |
+
"Murray Shanahan. 2024. Talking about large language models. Commun. ACM, 67(2):68-79.",
|
| 1835 |
+
"Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, and Jiang Bian. 2023. Naturalspeech 2: Latent diffusion models are natural and zero-shot speech and singing synthesizers. CoRR, abs/2304.09116.",
|
| 1836 |
+
"Reo Shimizu, Ryuichi Yamamoto, Masaya Kawamura, Yuma Shirahata, Hironori Doi, Tatsuya Komatsu, and Kentaro Tachibana. 2023. Promptts++: Controlling speaker identity in prompt-based text-to-speech using natural language descriptions. CoRR, abs/2309.08140."
|
| 1837 |
+
],
|
| 1838 |
+
"bbox": [
|
| 1839 |
+
115,
|
| 1840 |
+
85,
|
| 1841 |
+
485,
|
| 1842 |
+
917
|
| 1843 |
+
],
|
| 1844 |
+
"page_idx": 10
|
| 1845 |
+
},
|
| 1846 |
+
{
|
| 1847 |
+
"type": "list",
|
| 1848 |
+
"sub_type": "ref_text",
|
| 1849 |
+
"list_items": [
|
| 1850 |
+
"Adam Smith. 2006. Cognitive empathy and emotional empathy in human behavior and evolution. *The Psychological Record*, 56(1):3-21.",
|
| 1851 |
+
"Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. CoRR, abs/2211.09085.",
|
| 1852 |
+
"Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinez, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. CoRR, abs/2307.09288.",
|
| 1853 |
+
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.",
|
| 1854 |
+
"Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Neural codec language models are zero-shot text to speech synthesizers. CoRR, abs/2301.02111.",
|
| 1855 |
+
"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, pages 24824-24837.",
|
| 1856 |
+
"Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, and TatSeng Chua. 2023. Next-gpt: Any-to-any multimodal LLM. CoRR, abs/2309.05519.",
|
| 1857 |
+
"Yihan Wu, Xu Tan, Bohan Li, Lei He, Sheng Zhao, Ruihua Song, Tao Qin, and Tie-Yan Liu. 2022. Adaspeech 4: Adaptive text to speech in zero-shot scenarios. In *Interspeech*, pages 2568-2572. ISCA.",
|
| 1858 |
+
"Yaoxun Xu, Hangting Chen, Jianwei Yu, Qiaochu Huang, Zhiyong Wu, Shi-Xiong Zhang, Guangzhi"
|
| 1859 |
+
],
|
| 1860 |
+
"bbox": [
|
| 1861 |
+
510,
|
| 1862 |
+
85,
|
| 1863 |
+
880,
|
| 1864 |
+
920
|
| 1865 |
+
],
|
| 1866 |
+
"page_idx": 10
|
| 1867 |
+
},
|
| 1868 |
+
{
|
| 1869 |
+
"type": "footer",
|
| 1870 |
+
"text": "15019",
|
| 1871 |
+
"bbox": [
|
| 1872 |
+
477,
|
| 1873 |
+
927,
|
| 1874 |
+
524,
|
| 1875 |
+
938
|
| 1876 |
+
],
|
| 1877 |
+
"page_idx": 10
|
| 1878 |
+
},
|
| 1879 |
+
{
|
| 1880 |
+
"type": "page_number",
|
| 1881 |
+
"text": "11",
|
| 1882 |
+
"bbox": [
|
| 1883 |
+
490,
|
| 1884 |
+
942,
|
| 1885 |
+
507,
|
| 1886 |
+
953
|
| 1887 |
+
],
|
| 1888 |
+
"page_idx": 10
|
| 1889 |
+
},
|
| 1890 |
+
{
|
| 1891 |
+
"type": "list",
|
| 1892 |
+
"sub_type": "ref_text",
|
| 1893 |
+
"list_items": [
|
| 1894 |
+
"Li, Yi Luo, and Rongzhi Gu. 2024. Secap: Speech emotion captioning with large language model. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19323-19331. AAAI Press.",
|
| 1895 |
+
"Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. 2019. Cstr vctk corpus: English multispeaker corpus for cstr voice cloning toolkit (version 0.92). https://doi.org/10.7488/ds/2645.",
|
| 1896 |
+
"Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. 2023. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15757-15773. Association for Computational Linguistics.",
|
| 1897 |
+
"Fang Zhang, Yongxin Zhu, Xiangxiang Wang, Huang Chen, Xing Sun, and Linli Xu. 2024. Visual hallucination elevates speech recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19542-19550. AAAI Press.",
|
| 1898 |
+
"Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In International Conference on Learning Representations. OpenReview.net.",
|
| 1899 |
+
"Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, and Bingyi Kang. 2023. Bubogpt: Enabling visual grounding in multi-modal llms. CoRR, abs/2307.08581.",
|
| 1900 |
+
"Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. CoRR, abs/2304.10592."
|
| 1901 |
+
],
|
| 1902 |
+
"bbox": [
|
| 1903 |
+
115,
|
| 1904 |
+
85,
|
| 1905 |
+
487,
|
| 1906 |
+
580
|
| 1907 |
+
],
|
| 1908 |
+
"page_idx": 11
|
| 1909 |
+
},
|
| 1910 |
+
{
|
| 1911 |
+
"type": "page_number",
|
| 1912 |
+
"text": "15020 12",
|
| 1913 |
+
"bbox": [
|
| 1914 |
+
477,
|
| 1915 |
+
928,
|
| 1916 |
+
524,
|
| 1917 |
+
952
|
| 1918 |
+
],
|
| 1919 |
+
"page_idx": 11
|
| 1920 |
+
},
|
| 1921 |
+
{
|
| 1922 |
+
"type": "text",
|
| 1923 |
+
"text": "A Prompt for Dialogue Generation with Captions",
|
| 1924 |
+
"text_level": 1,
|
| 1925 |
+
"bbox": [
|
| 1926 |
+
114,
|
| 1927 |
+
83,
|
| 1928 |
+
559,
|
| 1929 |
+
99
|
| 1930 |
+
],
|
| 1931 |
+
"page_idx": 12
|
| 1932 |
+
},
|
| 1933 |
+
{
|
| 1934 |
+
"type": "text",
|
| 1935 |
+
"text": "You are the last speaker in the following daily dialogue.",
|
| 1936 |
+
"bbox": [
|
| 1937 |
+
188,
|
| 1938 |
+
142,
|
| 1939 |
+
532,
|
| 1940 |
+
156
|
| 1941 |
+
],
|
| 1942 |
+
"page_idx": 12
|
| 1943 |
+
},
|
| 1944 |
+
{
|
| 1945 |
+
"type": "text",
|
| 1946 |
+
"text": "Each speaker is provided with a speaking style caption after the conversation, which contains gender, speaking speed, pitch, energy and emotion. You MUST give a response FIRST depending on the dialogue history, followed by a speaking style caption.",
|
| 1947 |
+
"bbox": [
|
| 1948 |
+
186,
|
| 1949 |
+
158,
|
| 1950 |
+
800,
|
| 1951 |
+
204
|
| 1952 |
+
],
|
| 1953 |
+
"page_idx": 12
|
| 1954 |
+
},
|
| 1955 |
+
{
|
| 1956 |
+
"type": "text",
|
| 1957 |
+
"text": "NOTE: It is important to recognize the speaker's intention from the speaking style before generating response. You MUST keep response as short as possible.",
|
| 1958 |
+
"bbox": [
|
| 1959 |
+
186,
|
| 1960 |
+
206,
|
| 1961 |
+
796,
|
| 1962 |
+
236
|
| 1963 |
+
],
|
| 1964 |
+
"page_idx": 12
|
| 1965 |
+
},
|
| 1966 |
+
{
|
| 1967 |
+
"type": "text",
|
| 1968 |
+
"text": "Here is an example:",
|
| 1969 |
+
"bbox": [
|
| 1970 |
+
188,
|
| 1971 |
+
254,
|
| 1972 |
+
314,
|
| 1973 |
+
268
|
| 1974 |
+
],
|
| 1975 |
+
"page_idx": 12
|
| 1976 |
+
},
|
| 1977 |
+
{
|
| 1978 |
+
"type": "text",
|
| 1979 |
+
"text": "Input:",
|
| 1980 |
+
"bbox": [
|
| 1981 |
+
188,
|
| 1982 |
+
287,
|
| 1983 |
+
228,
|
| 1984 |
+
300
|
| 1985 |
+
],
|
| 1986 |
+
"page_idx": 12
|
| 1987 |
+
},
|
| 1988 |
+
{
|
| 1989 |
+
"type": "text",
|
| 1990 |
+
"text": "Speaker A: My specimen is deposited into the container in the room. Janice! You're not... gone?",
|
| 1991 |
+
"bbox": [
|
| 1992 |
+
186,
|
| 1993 |
+
302,
|
| 1994 |
+
776,
|
| 1995 |
+
316
|
| 1996 |
+
],
|
| 1997 |
+
"page_idx": 12
|
| 1998 |
+
},
|
| 1999 |
+
{
|
| 2000 |
+
"type": "text",
|
| 2001 |
+
"text": "Speaking in a high pitch, the male speaker conveyed a touch of low energy during normal-paced conversation.",
|
| 2002 |
+
"bbox": [
|
| 2003 |
+
186,
|
| 2004 |
+
318,
|
| 2005 |
+
796,
|
| 2006 |
+
347
|
| 2007 |
+
],
|
| 2008 |
+
"page_idx": 12
|
| 2009 |
+
},
|
| 2010 |
+
{
|
| 2011 |
+
"type": "text",
|
| 2012 |
+
"text": "Speaker B: Oh! Sid is still in his room. So did you do it? Did you make your deposit?",
|
| 2013 |
+
"bbox": [
|
| 2014 |
+
186,
|
| 2015 |
+
366,
|
| 2016 |
+
712,
|
| 2017 |
+
381
|
| 2018 |
+
],
|
| 2019 |
+
"page_idx": 12
|
| 2020 |
+
},
|
| 2021 |
+
{
|
| 2022 |
+
"type": "text",
|
| 2023 |
+
"text": "Speaking swiftly, her voice remains soft and steady. With little energy, her voice reaches a high pitch.",
|
| 2024 |
+
"bbox": [
|
| 2025 |
+
186,
|
| 2026 |
+
382,
|
| 2027 |
+
800,
|
| 2028 |
+
397
|
| 2029 |
+
],
|
| 2030 |
+
"page_idx": 12
|
| 2031 |
+
},
|
| 2032 |
+
{
|
| 2033 |
+
"type": "text",
|
| 2034 |
+
"text": "Speaker A: Yeah! yeah... The hard part is over!",
|
| 2035 |
+
"bbox": [
|
| 2036 |
+
186,
|
| 2037 |
+
414,
|
| 2038 |
+
478,
|
| 2039 |
+
429
|
| 2040 |
+
],
|
| 2041 |
+
"page_idx": 12
|
| 2042 |
+
},
|
| 2043 |
+
{
|
| 2044 |
+
"type": "text",
|
| 2045 |
+
"text": "The man's words are barely audible, but he keeps up the standard pace.",
|
| 2046 |
+
"bbox": [
|
| 2047 |
+
186,
|
| 2048 |
+
430,
|
| 2049 |
+
623,
|
| 2050 |
+
445
|
| 2051 |
+
],
|
| 2052 |
+
"page_idx": 12
|
| 2053 |
+
},
|
| 2054 |
+
{
|
| 2055 |
+
"type": "text",
|
| 2056 |
+
"text": "Speaker B:",
|
| 2057 |
+
"bbox": [
|
| 2058 |
+
188,
|
| 2059 |
+
463,
|
| 2060 |
+
260,
|
| 2061 |
+
476
|
| 2062 |
+
],
|
| 2063 |
+
"page_idx": 12
|
| 2064 |
+
},
|
| 2065 |
+
{
|
| 2066 |
+
"type": "text",
|
| 2067 |
+
"text": "Output:",
|
| 2068 |
+
"bbox": [
|
| 2069 |
+
188,
|
| 2070 |
+
495,
|
| 2071 |
+
238,
|
| 2072 |
+
508
|
| 2073 |
+
],
|
| 2074 |
+
"page_idx": 12
|
| 2075 |
+
},
|
| 2076 |
+
{
|
| 2077 |
+
"type": "text",
|
| 2078 |
+
"text": "That's not the hard part honey! The hard part is what comes next, I mean aren't you worried about the results?",
|
| 2079 |
+
"bbox": [
|
| 2080 |
+
186,
|
| 2081 |
+
510,
|
| 2082 |
+
796,
|
| 2083 |
+
539
|
| 2084 |
+
],
|
| 2085 |
+
"page_idx": 12
|
| 2086 |
+
},
|
| 2087 |
+
{
|
| 2088 |
+
"type": "text",
|
| 2089 |
+
"text": "A woman speaks quickly and her high-pitched tone is a trademark of her dazed.",
|
| 2090 |
+
"bbox": [
|
| 2091 |
+
186,
|
| 2092 |
+
542,
|
| 2093 |
+
678,
|
| 2094 |
+
557
|
| 2095 |
+
],
|
| 2096 |
+
"page_idx": 12
|
| 2097 |
+
},
|
| 2098 |
+
{
|
| 2099 |
+
"type": "text",
|
| 2100 |
+
"text": "Input:",
|
| 2101 |
+
"bbox": [
|
| 2102 |
+
188,
|
| 2103 |
+
592,
|
| 2104 |
+
228,
|
| 2105 |
+
605
|
| 2106 |
+
],
|
| 2107 |
+
"page_idx": 12
|
| 2108 |
+
},
|
| 2109 |
+
{
|
| 2110 |
+
"type": "text",
|
| 2111 |
+
"text": "{dialogue history with caption}",
|
| 2112 |
+
"bbox": [
|
| 2113 |
+
189,
|
| 2114 |
+
607,
|
| 2115 |
+
383,
|
| 2116 |
+
621
|
| 2117 |
+
],
|
| 2118 |
+
"page_idx": 12
|
| 2119 |
+
},
|
| 2120 |
+
{
|
| 2121 |
+
"type": "text",
|
| 2122 |
+
"text": "Output:",
|
| 2123 |
+
"bbox": [
|
| 2124 |
+
188,
|
| 2125 |
+
639,
|
| 2126 |
+
238,
|
| 2127 |
+
652
|
| 2128 |
+
],
|
| 2129 |
+
"page_idx": 12
|
| 2130 |
+
},
|
| 2131 |
+
{
|
| 2132 |
+
"type": "text",
|
| 2133 |
+
"text": "{response content}",
|
| 2134 |
+
"bbox": [
|
| 2135 |
+
189,
|
| 2136 |
+
655,
|
| 2137 |
+
307,
|
| 2138 |
+
669
|
| 2139 |
+
],
|
| 2140 |
+
"page_idx": 12
|
| 2141 |
+
},
|
| 2142 |
+
{
|
| 2143 |
+
"type": "text",
|
| 2144 |
+
"text": "{response caption}",
|
| 2145 |
+
"bbox": [
|
| 2146 |
+
189,
|
| 2147 |
+
671,
|
| 2148 |
+
307,
|
| 2149 |
+
684
|
| 2150 |
+
],
|
| 2151 |
+
"page_idx": 12
|
| 2152 |
+
},
|
| 2153 |
+
{
|
| 2154 |
+
"type": "footer",
|
| 2155 |
+
"text": "15021",
|
| 2156 |
+
"bbox": [
|
| 2157 |
+
477,
|
| 2158 |
+
927,
|
| 2159 |
+
522,
|
| 2160 |
+
940
|
| 2161 |
+
],
|
| 2162 |
+
"page_idx": 12
|
| 2163 |
+
},
|
| 2164 |
+
{
|
| 2165 |
+
"type": "page_number",
|
| 2166 |
+
"text": "13",
|
| 2167 |
+
"bbox": [
|
| 2168 |
+
490,
|
| 2169 |
+
941,
|
| 2170 |
+
507,
|
| 2171 |
+
954
|
| 2172 |
+
],
|
| 2173 |
+
"page_idx": 12
|
| 2174 |
+
},
|
| 2175 |
+
{
|
| 2176 |
+
"type": "text",
|
| 2177 |
+
"text": "B Prompt for Dialogue Generation without Captions",
|
| 2178 |
+
"text_level": 1,
|
| 2179 |
+
"bbox": [
|
| 2180 |
+
114,
|
| 2181 |
+
83,
|
| 2182 |
+
584,
|
| 2183 |
+
99
|
| 2184 |
+
],
|
| 2185 |
+
"page_idx": 13
|
| 2186 |
+
},
|
| 2187 |
+
{
|
| 2188 |
+
"type": "text",
|
| 2189 |
+
"text": "You are the last speaker in the following daily dialogue.",
|
| 2190 |
+
"bbox": [
|
| 2191 |
+
181,
|
| 2192 |
+
146,
|
| 2193 |
+
527,
|
| 2194 |
+
161
|
| 2195 |
+
],
|
| 2196 |
+
"page_idx": 13
|
| 2197 |
+
},
|
| 2198 |
+
{
|
| 2199 |
+
"type": "text",
|
| 2200 |
+
"text": "You MUST give a response depending on the dialogue history. You MUST keep response as short as possible.",
|
| 2201 |
+
"bbox": [
|
| 2202 |
+
181,
|
| 2203 |
+
162,
|
| 2204 |
+
794,
|
| 2205 |
+
193
|
| 2206 |
+
],
|
| 2207 |
+
"page_idx": 13
|
| 2208 |
+
},
|
| 2209 |
+
{
|
| 2210 |
+
"type": "text",
|
| 2211 |
+
"text": "Here is an example:",
|
| 2212 |
+
"bbox": [
|
| 2213 |
+
181,
|
| 2214 |
+
211,
|
| 2215 |
+
309,
|
| 2216 |
+
225
|
| 2217 |
+
],
|
| 2218 |
+
"page_idx": 13
|
| 2219 |
+
},
|
| 2220 |
+
{
|
| 2221 |
+
"type": "text",
|
| 2222 |
+
"text": "Input:",
|
| 2223 |
+
"bbox": [
|
| 2224 |
+
181,
|
| 2225 |
+
244,
|
| 2226 |
+
223,
|
| 2227 |
+
256
|
| 2228 |
+
],
|
| 2229 |
+
"page_idx": 13
|
| 2230 |
+
},
|
| 2231 |
+
{
|
| 2232 |
+
"type": "text",
|
| 2233 |
+
"text": "Speaker A: My specimen is deposited into the container in the room. Janice! You're not... gone?",
|
| 2234 |
+
"bbox": [
|
| 2235 |
+
181,
|
| 2236 |
+
259,
|
| 2237 |
+
771,
|
| 2238 |
+
274
|
| 2239 |
+
],
|
| 2240 |
+
"page_idx": 13
|
| 2241 |
+
},
|
| 2242 |
+
{
|
| 2243 |
+
"type": "text",
|
| 2244 |
+
"text": "Speaker B: Oh! Sid is still in his room. So did you do it? Did you make your deposit?",
|
| 2245 |
+
"bbox": [
|
| 2246 |
+
181,
|
| 2247 |
+
291,
|
| 2248 |
+
710,
|
| 2249 |
+
305
|
| 2250 |
+
],
|
| 2251 |
+
"page_idx": 13
|
| 2252 |
+
},
|
| 2253 |
+
{
|
| 2254 |
+
"type": "text",
|
| 2255 |
+
"text": "Speaker A: Yeah! yeah... The hard part is over!",
|
| 2256 |
+
"bbox": [
|
| 2257 |
+
181,
|
| 2258 |
+
323,
|
| 2259 |
+
473,
|
| 2260 |
+
338
|
| 2261 |
+
],
|
| 2262 |
+
"page_idx": 13
|
| 2263 |
+
},
|
| 2264 |
+
{
|
| 2265 |
+
"type": "text",
|
| 2266 |
+
"text": "Speaker B:",
|
| 2267 |
+
"bbox": [
|
| 2268 |
+
181,
|
| 2269 |
+
356,
|
| 2270 |
+
253,
|
| 2271 |
+
370
|
| 2272 |
+
],
|
| 2273 |
+
"page_idx": 13
|
| 2274 |
+
},
|
| 2275 |
+
{
|
| 2276 |
+
"type": "text",
|
| 2277 |
+
"text": "Output:",
|
| 2278 |
+
"bbox": [
|
| 2279 |
+
181,
|
| 2280 |
+
388,
|
| 2281 |
+
233,
|
| 2282 |
+
401
|
| 2283 |
+
],
|
| 2284 |
+
"page_idx": 13
|
| 2285 |
+
},
|
| 2286 |
+
{
|
| 2287 |
+
"type": "text",
|
| 2288 |
+
"text": "That's not the hard part honey! The hard part is what comes next, I mean aren't you worried about the results?",
|
| 2289 |
+
"bbox": [
|
| 2290 |
+
181,
|
| 2291 |
+
404,
|
| 2292 |
+
791,
|
| 2293 |
+
432
|
| 2294 |
+
],
|
| 2295 |
+
"page_idx": 13
|
| 2296 |
+
},
|
| 2297 |
+
{
|
| 2298 |
+
"type": "text",
|
| 2299 |
+
"text": "Input:",
|
| 2300 |
+
"bbox": [
|
| 2301 |
+
181,
|
| 2302 |
+
469,
|
| 2303 |
+
223,
|
| 2304 |
+
482
|
| 2305 |
+
],
|
| 2306 |
+
"page_idx": 13
|
| 2307 |
+
},
|
| 2308 |
+
{
|
| 2309 |
+
"type": "text",
|
| 2310 |
+
"text": "{dialogue history without caption}",
|
| 2311 |
+
"bbox": [
|
| 2312 |
+
184,
|
| 2313 |
+
483,
|
| 2314 |
+
398,
|
| 2315 |
+
499
|
| 2316 |
+
],
|
| 2317 |
+
"page_idx": 13
|
| 2318 |
+
},
|
| 2319 |
+
{
|
| 2320 |
+
"type": "text",
|
| 2321 |
+
"text": "Output:",
|
| 2322 |
+
"bbox": [
|
| 2323 |
+
181,
|
| 2324 |
+
517,
|
| 2325 |
+
233,
|
| 2326 |
+
530
|
| 2327 |
+
],
|
| 2328 |
+
"page_idx": 13
|
| 2329 |
+
},
|
| 2330 |
+
{
|
| 2331 |
+
"type": "text",
|
| 2332 |
+
"text": "{response content}",
|
| 2333 |
+
"bbox": [
|
| 2334 |
+
184,
|
| 2335 |
+
533,
|
| 2336 |
+
302,
|
| 2337 |
+
546
|
| 2338 |
+
],
|
| 2339 |
+
"page_idx": 13
|
| 2340 |
+
},
|
| 2341 |
+
{
|
| 2342 |
+
"type": "footer",
|
| 2343 |
+
"text": "15022",
|
| 2344 |
+
"bbox": [
|
| 2345 |
+
477,
|
| 2346 |
+
927,
|
| 2347 |
+
524,
|
| 2348 |
+
940
|
| 2349 |
+
],
|
| 2350 |
+
"page_idx": 13
|
| 2351 |
+
},
|
| 2352 |
+
{
|
| 2353 |
+
"type": "page_number",
|
| 2354 |
+
"text": "14",
|
| 2355 |
+
"bbox": [
|
| 2356 |
+
490,
|
| 2357 |
+
941,
|
| 2358 |
+
509,
|
| 2359 |
+
954
|
| 2360 |
+
],
|
| 2361 |
+
"page_idx": 13
|
| 2362 |
+
}
|
| 2363 |
+
]
|
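As a quick orientation for users of this batch, the sketch below shows one way the `content_list.json` records rendered above could be loaded and filtered in Python. The field names (`type`, `text`, `text_level`, `bbox`, `page_idx`) come from the diff itself; the file path is only illustrative.

```python
import json

# Load one of the content_list files added in this commit (path is illustrative).
with open("4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_content_list.json", encoding="utf-8") as f:
    blocks = json.load(f)

# Collect section headings: text blocks flagged with text_level == 1, paired with their page index.
headings = [
    (b["page_idx"], b["text"])
    for b in blocks
    if b.get("type") == "text" and b.get("text_level") == 1
]

def page_text(page_idx: int) -> str:
    """Concatenate body text for one page, skipping headings, footers and page numbers."""
    parts = [
        b["text"]
        for b in blocks
        if b.get("page_idx") == page_idx and b.get("type") == "text" and not b.get("text_level")
    ]
    return "\n".join(parts)

print(headings[:5])
print(page_text(7)[:300])
```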
2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/4a7b8b8a-ac90-4b97-8066-ba54fca5fe43_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:67d0e1c85589c23c50a6113d5fa6a5d54664f6fffb4bafa55e3d0cef079d715f
|
| 3 |
+
size 787029
|
2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/full.md
ADDED
|
@@ -0,0 +1,397 @@
|
| 1 |
+
# Talk With Human-like Agents: Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction
|
| 2 |
+
|
| 3 |
+
Haoqiu Yan$^{*1,3}$, Yongxin Zhu$^{*2,3}$, Kai Zheng$^{1,3}$,
|
| 4 |
+
|
| 5 |
+
Bing Liu$^{4}$, Haoyu Cao$^{4}$, Deqiang Jiang$^{4}$, Linli Xu$^{\dagger,3}$
|
| 6 |
+
|
| 7 |
+
$^{1}$ School of Computer Science and Technology, University of Science and Technology of China
|
| 8 |
+
|
| 9 |
+
$^{2}$ School of Data Science, University of Science and Technology of China
|
| 10 |
+
|
| 11 |
+
$^{3}$ State Key Laboratory of Cognitive Intelligence, $^{4}$ Tencent Youtu Lab
|
| 12 |
+
|
| 13 |
+
{yanhq,zyx2016,dthdzk}@mail.ustc.edu.cn
|
| 14 |
+
|
| 15 |
+
{billbliu,rechycao,dqiangjiang}@tencent.com linlixu@ustc.edu.cn
|
| 16 |
+
|
| 17 |
+
# Abstract
|
| 18 |
+
|
| 19 |
+
Large Language Model (LLM)-enhanced agents become increasingly prevalent in Human-AI communication, offering vast potential from entertainment to professional domains. However, current multi-modal dialogue systems overlook the acoustic information present in speech, which is crucial for understanding human communication nuances. This oversight can lead to misinterpretations of speakers' intentions, resulting in inconsistent or even contradictory responses within dialogues. To bridge this gap, in this paper, we propose PerceptiveAgent, an empathetic multi-modal dialogue system designed to discern deeper or more subtle meanings beyond the literal interpretations of words through the integration of speech modality perception. Employing LLMs as a cognitive core, PerceptiveAgent perceives acoustic information from input speech and generates empathetic responses based on speaking styles described in natural language. Experimental results indicate that PerceptiveAgent excels in contextual understanding by accurately discerning the speakers' true intentions in scenarios where the linguistic meaning is either contrary to or inconsistent with the speaker's true feelings, producing more nuanced and expressive spoken dialogues. Code is publicly available at: https://github.com/Haoqiu-Yan/PerceptiveAgent.
|
| 20 |
+
|
| 21 |
+
# 1 Introduction
|
| 22 |
+
|
| 23 |
+
Artificial Intelligence (AI) agents (Russell and Norvig, 2010; Negnevitsky, 2005) are entities designed to replicate human-like intelligence and functionalities, serving as the essential building blocks of AI systems. An ideal agent should be capable of perceiving its environment with sensors, making informed decisions, and then taking actions in response to users or scenarios. Recently,
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
Figure 1: Examples illustrating the definition of empathy within dialogues.
|
| 27 |
+
|
| 28 |
+
Large Language Models (LLMs) (Wei et al., 2022; Shanahan, 2024; Taylor et al., 2022) have exhibited remarkable capabilities in diverse tasks, offering opportunities for building general AI agents that engage in human-like interactions, such as virtual assistants and intelligent robots. However, current text-only dialogue systems (Peng et al., 2023; Touvron et al., 2023) fall short in bridging the gap between experimental and realistic scenarios, where humans perceive and understand the world through diverse multi-modal information. Thus, the integration of acoustic information into dialogues has the potential to foster the development of more human-like agents, thereby enhancing the empathetic experience they offer.
|
| 29 |
+
|
| 30 |
+
Empathetic responses involve two essential aspects: cognitive and affective empathy (Cuff et al., 2016; Kim et al., 2021; Reis et al., 2011; Smith, 2006), which reflect an understanding of the human-talker's thoughts and feelings respectively. Specifically, cognitive empathy involves understanding the human-talker's thoughts, perspectives, and described events, enabling the agent to provide responses relevant to the dialogue topic (Sabour et al., 2022). Conversely, affective empathy entails responding based on observed emotional expressions in the dialogue history, contributing to the
|
| 31 |
+
|
| 32 |
+
naturalness of synthesized speech (Cong et al., 2021; Guo et al., 2021; Nishimura et al., 2022). While recent works (Saito et al., 2023; Nguyen et al., 2022; Mitsui et al., 2023) leverage LLM's strong capabilities of contextual understanding and content generation to synthesize empathetic speeches, there remains a discrepancy between cognitive and affective empathy. This arises because cognitive content is preassigned before affective speech is deduced from latent representations of multi-modal dialogue history.
|
| 33 |
+
|
| 34 |
+
Recently, advancements in multi-modal content perception and generation have been achieved by various methods (Zhang et al., 2023; Huang et al., 2024; Chen et al., 2023; Wu et al., 2023), where audio is represented as either recognized text with an automatic speech recognition model or discrete features with a speech encoder. However, while linguistic information in speech is predominantly captured by both discrete acoustic units and textual representations, acoustic features tend to be disregarded. This oversight can lead to misinterpretations of the speaker's intentions, resulting in discrepant or even contradictory responses within the dialogue history. As illustrated in Figure 1, the left scenario fails to consider the perspective of the listener while the right one barely understands or empathizes with the speaker's feelings.
|
| 35 |
+
|
| 36 |
+
In this paper, we propose PerceptiveAgent, an empathetic multi-modal dialogue system that can discern deeper or more subtle meanings beyond the literal interpretations of words, based on speaking styles described in natural language. Specifically, PerceptiveAgent first comprehends the speaker's intentions accurately by a perceptive captioner model that captures acoustic features from each speech within dialogues. Subsequently, an LLM module acts as the cognitive core, producing the relevant response content with a caption describing how to articulate the response. A Multi-Speaker and Multi-Attribute Synthesizer (MSMA-Synthesizer) is then developed to synthesize nuanced and expressive speech.
|
| 37 |
+
|
| 38 |
+
Our contributions include the following:
|
| 39 |
+
|
| 40 |
+
- We pioneer the construction of a speech captioner model to perceive and express acoustic information through natural language.
|
| 41 |
+
- We develop an empathetic multi-modal dialogue system capable of identifying the speaker's true intentions through audio modal
|
| 42 |
+
|
| 43 |
+
ity perception and generating empathetic speech.
|
| 44 |
+
|
| 45 |
+
- Experiments demonstrate that PerceptiveAgent can accurately discern the true intentions in scenarios where the literal interpretations of words are either contrary to or inconsistent with the speaker's true feelings.
|
| 46 |
+
|
| 47 |
+
# 2 Related Work
|
| 48 |
+
|
| 49 |
+
# 2.1 Multi-modal Dialogue Systems
|
| 50 |
+
|
| 51 |
+
Recent advances in multi-modal dialogue systems have primarily focused on transforming speech into discrete latent representation. For instance, Zhang et al. (2023); Chen et al. (2023); Wu et al. (2023) utilize speech encoders to perceive speech and then synthesize responses according to discrete acoustic units derived from LLMs, showing intrinsic cross-modal conversational abilities. Besides, works including (Nguyen et al., 2022; Mitsui et al., 2023) autonomously generate two-channel spoken dialogues, simulating realistic interactions between agents, including vocal interactions, laughter, and turn-taking. However, while discrete acoustic units capture linguistic information effectively, prosodic features are mostly ignored. To address this limitation and preserve prosodic information as much as possible, we develop a multi-modal dialog system that perceives prosody through speech captioning and responds empathetically using an LLM and a speech synthesizer.
|
| 52 |
+
|
| 53 |
+
# 2.2 Cross-Modal Text Generation
|
| 54 |
+
|
| 55 |
+
Cross-modal text generation involves generating text conditioned on other modalities such as audio and vision (Li et al., 2022; Liu et al., 2024; Zhang et al., 2024), where the key challenge is to align multi-modal features with the text latent space. Recent approaches (Zhu et al., 2023; Chen et al., 2023) address this challenge by aligning off-the-shelf pre-trained LLMs with learnable visual encoders (Li et al., 2023; Zhao et al., 2023), transforming multi-modal representations as learnable query embeddings while keeping both pre-trained LLMs and visual encoders frozen. Similarly, for audio caption tasks, audio embeddings are mapped to a sequence of prefix vectors and then taken as the context input for caption generation (Kim et al., 2023; Schaumlöffel et al., 2023; Xu et al., 2024). However, to the best of our knowledge, we are
|
| 56 |
+
|
| 57 |
+

|
| 58 |
+
Figure 2: The overall architecture of PerceptiveAgent. Three components are interconnected: the speech captioner, the LLM and the MSMA-Synthesizer. The speech captioner serves as a multi-modal sensory system, perceiving acoustic information from the dialogue history, which is crucial for discerning the speakers' intentions. The LLM acts as the cognitive core, responsible for comprehending the speakers' thoughts and emotions. Conditioned on the response contents and multiple attributes provided by the LLM, the MSMA-Synthesizer generates expressive speech outputs.
|
| 59 |
+
|
| 60 |
+
the first to construct a speech captioner capable of perceiving acoustic information in dialogues.
|
| 61 |
+
|
| 62 |
+
# 2.3 Expressive Text-to-Speech Synthesis
|
| 63 |
+
|
| 64 |
+
Given a transcript, text-to-speech (TTS) models achieve voice variability by conditioning on a zero-shot speech prompt or a text prompt of the desired style. For instance, zero-shot TTS systems reproduce the speaker characteristics and acoustic environments of a speech prompt through in-context learning (Wu et al., 2022; Wang et al., 2023; Shen et al., 2023; Le et al., 2023). However, these systems lack independent control over speaking styles, including prosody, emotion, and acoustic environment. To address this, text prompts have been introduced for more natural and general speech synthesis. Approaches like (Guo et al., 2023; Leng et al., 2023; Shimizu et al., 2023; Ji et al., 2023) express speaking styles in natural language, while methods such as (Polyak et al., 2021; Nguyen et al., 2023) utilize explicit labels to generate diverse speech that matches the prompt. We follow the latter direction and construct a speech synthesis model with multiple speaking style labels.
|
| 65 |
+
|
| 66 |
+
# 3 Methods
|
| 67 |
+
|
| 68 |
+
As a multimodal dialog system, PerceptiveAgent is capable of audio modality perception and empathetic speech generation, which is achieved through the incorporation of prosodic
|
| 69 |
+
|
| 70 |
+
information expressed in natural language. To capture prosodic features from speech inputs, we propose a novel speech caption model that aligns audio features with the latent space of a pre-trained language model. To enhance empathy and diversity of the simulated speech communication, a multispeaker and multi-attribute vocoder is developed. This vocoder synthesizes speech by conditioning on both response contents and captions of speaking styles, resulting in more engaging and realistic dialogues.
|
| 71 |
+
|
| 72 |
+
# 3.1 Speech Captioner
|
| 73 |
+
|
| 74 |
+
The speech caption model is designed to capture prosodic information and transcribe it as textual descriptions. It operates by encoding speech inputs by the speech encoder in ImageBind (Girdhar et al., 2023), followed by description generation by the pre-trained GPT-2 decoder (Radford et al., 2019). To bridge the gap between the speech encoder and the text decoder, we introduce a Querying Transformer (Q-former) pre-trained in BuboGPT (Zhao et al., 2023). This model is connected with a linear projection layer, which is subsequently followed by a text decoder. To effectively fine-tune this model, we integrate the following two fine-tuning strategies, while keeping the speech encoder frozen throughout the training procedure.
|
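The markdown above describes the captioner's wiring (frozen ImageBind speech encoder, Q-Former, linear projection, GPT-2 decoder) but includes no code. As a rough, hedged sketch of that wiring only, the PyTorch-style module below uses stand-in layers and assumed dimensions; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpeechCaptioner(nn.Module):
    """Schematic wiring only: frozen speech encoder -> Q-Former -> projection -> text decoder."""

    def __init__(self, audio_dim=1024, num_queries=32, hidden=768):
        super().__init__()
        # Stand-in for the frozen ImageBind speech encoder (kept frozen during training).
        self.speech_encoder = nn.Identity()
        # Learnable query embeddings compressed by a (simplified) Q-Former block.
        self.queries = nn.Parameter(torch.randn(num_queries, hidden))
        self.qformer = nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.audio_proj = nn.Linear(audio_dim, hidden)   # map audio features into the Q-Former space
        self.out_proj = nn.Linear(hidden, hidden)        # linear projection into the decoder space
        # In the paper the decoder is a pre-trained GPT-2; a Transformer layer stands in here.
        self.text_decoder = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)

    def forward(self, audio_feats):                      # audio_feats: (B, T, audio_dim)
        with torch.no_grad():                            # speech encoder stays frozen
            feats = self.speech_encoder(audio_feats)
        feats = self.audio_proj(feats)
        q = self.queries.unsqueeze(0).expand(audio_feats.size(0), -1, -1)
        q = self.qformer(q, feats)                       # queries cross-attend to audio features
        prefix = self.out_proj(q)                        # prefix vectors fed to the text decoder
        return self.text_decoder(prefix)

prefix = SpeechCaptioner()(torch.randn(2, 50, 1024))
print(prefix.shape)  # torch.Size([2, 32, 768])
```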
| 75 |
+
|
| 76 |
+
# 3.1.1 Multi-modal Embedding Alignment
|
| 77 |
+
|
| 78 |
+
Prefix tuning is utilized to align the output of the Q-former with the latent space of the text decoder. A query vector with fixed dimensions is generated by the Q-former. These embeddings interact with each other through self-attention layers and with frozen audio features through cross-attention layers. To bridge the gap with the word embedding space, query embeddings are used as prefix vectors and attended to by the text decoder. This bottleneck architecture serves to compel the queries to extract the acoustic information that is most relevant to the textual descriptions.
|
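A minimal sketch of the prefix-tuning step described here, assuming Hugging Face `transformers` and a Q-Former output already projected to GPT-2's hidden size; the `-100` labels simply exclude the prefix positions from the language-modeling loss. This is illustrative, not the authors' training code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")

# Pretend Q-Former output: 32 query embeddings already projected to GPT-2's hidden size.
prefix = torch.randn(1, 32, gpt2.config.n_embd, requires_grad=True)

caption = "In a hushed voice, she speaks rapidly."
ids = tok(caption, return_tensors="pt").input_ids
tok_embeds = gpt2.transformer.wte(ids)                       # word embeddings of the target caption

inputs_embeds = torch.cat([prefix, tok_embeds], dim=1)       # prefix vectors precede the text
labels = torch.cat(
    [torch.full((1, prefix.size(1)), -100, dtype=torch.long), ids], dim=1
)                                                            # -100: no loss on prefix positions
loss = gpt2(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()                                              # gradients flow back into the prefix,
                                                             # i.e. into the Q-Former in the real setup
```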
| 79 |
+
|
| 80 |
+
# 3.1.2 Instruction Tuning
|
| 81 |
+
|
| 82 |
+
To bridge the gap between the next-word prediction objective of the pre-trained decoder and the objective of acquiring multi-modal information conditioned on prefix sequences, instruction tuning is employed to train the speech captioner. We first construct an instructional dataset, where each instance comprises three elements: a query vector, an instruction, and a caption. The instruction is described as a natural language text sequence that specifies the task, serving to constrain the model's outputs to align with desired response characteristics or domain knowledge. This provides a channel for humans to intervene with the model's behaviors. Varied instructions are gathered using GPT-3.5-Turbo in this work. Additionally, the caption represents the desired output following the instruction, while the query vector is derived from acoustic representations. Throughout the training procedure, the parameters of the speech encoder are fixed, while the Q-former and text decoder remain trainable. During each inference process, instructions are randomly selected and incorporated into the generated sequence to enhance diversity and simulate human cognitive processes more effectively, thereby yielding more varied outputs.
|
| 83 |
+
|
| 84 |
+
# 3.2 PerceptiveAgent
|
| 85 |
+
|
| 86 |
+
Figure 2 illustrates the overall framework of PerceptiveAgent, a multi-modal dialogue system comprising three interconnected stages: Intention Discerning by the speech captioner, Comprehension through Sensory Integration by the LLM, and Expressive Speech Synthesis by the MSMA-Synthesizer. PerceptiveAgent exhibits two key characteristics: (1) it leverages natural language to perceive and express acoustic information, and (2) it employs an LLM as the cognitive core of
|
| 87 |
+
|
| 88 |
+
the system to comprehend the multi-modal contextual history and deliver audio responses.
|
| 89 |
+
|
| 90 |
+
# 3.2.1 Caption for Intention Discerning
|
| 91 |
+
|
| 92 |
+
In the initial stage, a speech caption model is employed to interpret acoustic information from audio inputs. Each speech within the dialogue history is encoded into latent features by a frozen speech encoder. These features are then compressed into a query vector with fixed dimensions, sharing the same latent space as the word embedding of a text decoder. Conditioned on this query sequence and instruction prompt, a textual caption describing the speaking styles for each speech is deduced by the text decoder.
|
| 93 |
+
|
| 94 |
+
# 3.2.2 Comprehension through Sensory Integration
|
| 95 |
+
|
| 96 |
+
Subsequently, an LLM module acting as the cognitive core is integrated into the system, where GPT-3.5-Turbo is employed. The transcribed textual content for each audio is merged with the previously generated caption before being fed into the LLM. Prompts in Appendix A and B are designed to effectively leverage the LLM's contextual understanding abilities. Upon recognizing speakers' intentions by assimilating both the contextual caption and content, the LLM deduces the relevant dialogue content and generates a caption describing how to articulate the derived content.
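A rough sketch of how each turn's transcript and caption could be merged into the LLM context is given below. The exact prompt wording lives in Appendices A and B; the function and formatting here are illustrative assumptions that mirror the appendix layout.

```python
# Sketch: merging transcribed content with generated captions into one LLM prompt.
def format_dialogue(history):
    """history: list of (speaker, transcript, caption) tuples."""
    lines = []
    for speaker, transcript, caption in history:
        lines.append(f"Speaker {speaker}: {transcript}")
        lines.append(f"Caption: {caption}")
    lines.append("Speaker B:")  # the agent answers as the last speaker
    return "\n".join(lines)

prompt = format_dialogue([
    ("A", "Are you being British?!", "In a hushed voice, she speaks rapidly."),
    ("B", "No. Not anymore.", "Lower vocal quality, customary speed, subdued energy."),
])
print(prompt)
```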
|
| 97 |
+
|
| 98 |
+
# 3.2.3 Expressive Speech Synthesis
|
| 99 |
+
|
| 100 |
+
Finally, empathetic audio responses are synthesized by the MSMA-Synthesizer, a Multi-Speaker and Multi-Attribute vocoder conditioned on the generated dialogue contents and captions. This vocoder is a modification of the vocoder in Nguyen et al. (2023) that facilitates finer control over speech expressiveness. In addition to taking discrete speech units, speaker identity and style (emotion) as inputs, our vocoder introduces multiple prosodic attributes, including pitch, speed and energy. At each inference, the LLM's output dialogue content and caption are transformed into discrete units and attribute labels, respectively, before being fed into the vocoder. Specifically, a text-to-unit (T2U) model with a Transformer machine-translation architecture (Vaswani et al., 2017) converts response contents into acoustic units. Emotional and prosodic labels are recognized from response captions by sentence classifiers, implemented with GPT-3.5-Turbo in this work, while the speaker label is randomly selected.
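The synthesis path can be summarized by the following high-level sketch. `text_to_units`, `classify_attributes` and `vocoder` are hypothetical handles for the T2U model, the GPT-3.5-Turbo sentence classifiers and the MSMA-Synthesizer; none of these names come from the paper.

```python
# High-level sketch of the expressive speech synthesis pipeline.
import random

def synthesize(response_text, response_caption, text_to_units, classify_attributes, vocoder, speakers):
    units = text_to_units(response_text)           # discrete acoustic units from the T2U model
    attrs = classify_attributes(response_caption)  # e.g. {"emotion": "happy", "pitch": "high",
                                                   #       "speed": "normal", "energy": "high"}
    speaker = random.choice(speakers)              # speaker identity is randomly selected
    return vocoder(units=units, speaker=speaker, **attrs)
```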
|
| 101 |
+
|
| 102 |
+
The architecture of the vocoder comprises a speaker embedder, an attribute embedder and a HiFi-GAN vocoder. The speaker embedder uses look-up tables to embed speaker identities, while a set of controllable attributes including speed, emotion, energy and pitch are embedded by the attribute embedder. To synthesize expressive speech, discrete units are initially embedded and up-sampled through a series of blocks consisting of a transposed convolution and a residual block with dilated layers. Prior to duration prediction, this up-sampled sequence is concatenated with the speed embedding. The speaker embedding and style embedding are subsequently concatenated to each frame of the up-sampled sequence, which is transformed into a mel-spectrogram by the HiFi-GAN generator.
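A minimal sketch of the conditioning step, i.e., how speaker and attribute embeddings could be combined with the unit sequence before the generator, is given below. Dimensions and label vocabularies are illustrative assumptions, and the transposed-convolution up-sampling and duration prediction are omitted for brevity.

```python
# Sketch of frame-wise conditioning with speaker/attribute look-up tables.
import torch
import torch.nn as nn

class ConditioningSketch(nn.Module):
    def __init__(self, n_units=2000, n_speakers=4, n_styles=8, dim=128):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, dim)
        self.speaker_emb = nn.Embedding(n_speakers, dim)  # look-up table for speaker identity
        self.style_emb = nn.Embedding(n_styles, dim)      # style / emotion label
        self.speed_emb = nn.Embedding(3, dim)             # e.g. slow / normal / fast

    def forward(self, units, speaker, style, speed):
        x = self.unit_emb(units)                          # (B, T, dim) unit sequence (up-sampling omitted)
        T = x.size(1)
        spd = self.speed_emb(speed).unsqueeze(1).expand(-1, T, -1)
        x = torch.cat([x, spd], dim=-1)                   # speed concatenated before duration prediction
        spk = self.speaker_emb(speaker).unsqueeze(1).expand(-1, T, -1)
        sty = self.style_emb(style).unsqueeze(1).expand(-1, T, -1)
        return torch.cat([x, spk, sty], dim=-1)           # per-frame conditioning for the generator

feats = ConditioningSketch()(
    units=torch.randint(0, 2000, (1, 50)),
    speaker=torch.tensor([0]), style=torch.tensor([2]), speed=torch.tensor([1]),
)
print(feats.shape)  # torch.Size([1, 50, 512])
```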
|
| 103 |
+
|
| 104 |
+
# 4 Experiments
|
| 105 |
+
|
| 106 |
+
# 4.1 Experimental Setup
|
| 107 |
+
|
| 108 |
+
Datasets. We train our speech captioner on the TextrolSpeech (Ji et al., 2023) dataset, which consists of 236,220 pairs of captions and corresponding speech samples. The captions in this dataset describe speaking styles in terms of five factors: gender, emotion, pitch, speed and energy.
|
| 109 |
+
|
| 110 |
+
For the MSMA-Synthesizer, we reproduce the vocoder proposed by Nguyen et al. (2023) using the EXPRESSO, LJSpeech (Ito and Johnson, 2017) and VCTK (Yamagishi et al., 2019) datasets. The EXPRESSO dataset is subsequently labeled by the speech captioner and GPT-3.5-Turbo to recognize the pitch, speed and energy attributes of each speech sample. We then utilize this labeled EXPRESSO dataset and the reproduced vocoder to train the MSMA-Synthesizer. We refer to the reading and conversation sections of EXPRESSO as Exp-R and Exp-I, respectively. Additionally, a T2U model is trained on the same datasets as the MSMA-Synthesizer to maintain consistency in unit distribution.
|
| 111 |
+
|
| 112 |
+
To evaluate the overall performance of our system, we utilize a speech dialogue dataset from MELD (Poria et al., 2019). This dataset provides emotion labels for each sentence, which serve as ground-truth labels for both response content and speech evaluation. The speech in this dataset is recorded in realistic settings with interruptions and environmental noise. In our evaluation, we only consider conversations with two speakers.
|
| 113 |
+
|
| 114 |
+
We utilize English datasets throughout the entire training process. As a consequence, PerceptiveAgent
|
| 115 |
+
|
| 116 |
+
currently supports only the English language. However, it is noteworthy that PerceptiveAgent can be readily expanded to accommodate multiple languages. Only the MSMA-Synthesizer module requires modification, as the language-agnostic nature of the speech captioner allows it to generate captions from various languages. Meanwhile, existing methods can recognize semantic contents and translate them into English.
|
| 117 |
+
|
| 118 |
+
Configurations. We utilize the speech encoder in ImageBind (Girdhar et al., 2023), the pre-trained Q-former in BuboGPT (Zhao et al., 2023), and the pre-trained GPT-2 (Radford et al., 2019) to implement the speech captioner. Fine-tuning is conducted for 43,000 steps with a batch size of 16. For decoding, we use Top-k sampling with $k = 10$ and set the minimum and maximum sequence lengths to 20 and 50, respectively. We reproduce the vocoder for 400,000 steps with a batch size of 32 and a learning rate of 0.0004, and train the MSMA-Synthesizer for 200,000 steps with a batch size of 32 and a learning rate of 0.0004. The T2U model is structured as a sequence-to-sequence Transformer with 4 encoder layers, 4 decoder layers, and 4 attention heads, with a dropout of 0.1. We utilize HuBERT (Hsu et al., 2021) with 2000 clusters to acquire units as targets<sup>1</sup>, provided by the textlesslib toolbox (Kharitonov et al., 2022). Decoding is performed using Top-k sampling with $k = 10$. All experiments are conducted on 4 NVIDIA GeForce RTX 4090 GPUs.
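The caption decoding settings above could be expressed, for example, as Hugging Face `generate` arguments; the call below is a hypothetical sketch using values from this section, not the paper's actual decoding code.

```python
# Hypothetical decoding configuration for the caption decoder.
gen_kwargs = dict(
    do_sample=True,
    top_k=10,        # Top-k sampling with k = 10
    min_length=20,   # minimum caption length
    max_length=50,   # maximum caption length
)
# caption_ids = decoder.generate(inputs_embeds=prefix_embeds, **gen_kwargs)
```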
|
| 119 |
+
|
| 120 |
+
# 4.2 Evaluation
|
| 121 |
+
|
| 122 |
+
Speech-GPT3.5. We implement Speech-GPT3.5, a dialogue system focusing solely on linguistic information, as a baseline. Based on the textual history content recognized from the speech input, this system comprehends the dialogue context with GPT-3.5-Turbo. After generating the response content, the audio response is synthesized by an off-the-shelf TTS model provided by OpenAI<sup>2</sup>.
|
| 123 |
+
|
| 124 |
+
Metrics. The performance of PerceptiveAgent is evaluated in terms of two fundamental aspects: 1) cognitive empathy demonstrates the ability to consider the perspective of speakers, reflected in the content of the response; and 2) affective empathy exhibits the ability to emotionally understand and share the speaker's feelings, reflected in the
|
| 125 |
+
|
| 126 |
+
<table><tr><td></td><td>BERTScore</td><td>Accuracy</td></tr><tr><td>Speech-GPT3.5</td><td>53.03±10.20</td><td>0.74</td></tr><tr><td>PerceptiveAgent</td><td>54.36±9.25</td><td>21.89</td></tr><tr><td>-w/o captions</td><td>-</td><td>16.53</td></tr></table>
|
| 127 |
+
|
| 128 |
+
Table 1: Performance evaluation of PerceptiveAgent. BERTScore $(\%)$ measures the quality of cognitive empathy in linguistic contents, while accuracy $(\%)$ assesses the quality of affective empathy in acoustic responses.
|
| 129 |
+
|
| 130 |
+
prosody of the generated audio response. Cognitive and affective empathy are assessed by evaluating the quality of generated textual responses and audio responses, respectively.
|
| 131 |
+
|
| 132 |
+
To evaluate the quality of dialogue text generation, we employ the BERTScore automatic evaluation metric proposed by Zhang et al. (2020), which computes a similarity score for each token in the candidate sentence with each token in the reference sentence. To evaluate the expressiveness of audio generation, we employ an expressive style classifier proposed by Nguyen et al. (2023) to recognize emotion labels for both generated and true speeches. Classification accuracy is used to measure the performance.
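A sketch of these two automatic metrics is shown below, assuming the `bert-score` and scikit-learn packages; the candidate/reference texts and emotion labels are placeholders rather than data from the paper.

```python
# Sketch of the cognitive (BERTScore) and affective (emotion accuracy) metrics.
from bert_score import score
from sklearn.metrics import accuracy_score

cands = ["I needed to pick up a few things."]
refs = ["I'm just picking up some things for a party."]
P, R, F1 = score(cands, refs, lang="en")   # token-level similarity via BERT embeddings
print(f"BERTScore F1: {F1.mean().item():.4f}")

pred_emotions = ["sad", "happy"]           # labels recognized from generated speech
true_emotions = ["sad", "neutral"]         # ground-truth labels (e.g. from MELD)
print("Emotion accuracy:", accuracy_score(true_emotions, pred_emotions))
```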
|
| 133 |
+
|
| 134 |
+
Besides, we evaluate the perception ability of the speech captioner on the validation and test sets split from the TextrolSpeech dataset. We frame this evaluation as a multi-attribute classification task. After generating captions from speech, the predicted labels for the attributes gender, emotion, pitch, speed and energy are determined by a sentence classifier (GPT-3.5-Turbo), while the true labels are provided by the TextrolSpeech dataset. Weighted precision, recall and F1-score are used to quantify the disparity between the predicted and true labels.
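For one attribute (e.g. emotion), the weighted metrics could be computed as in the sketch below, assuming scikit-learn; the label lists are placeholders, and in the paper the predicted labels come from the GPT-3.5-Turbo sentence classifier.

```python
# Sketch of the weighted multi-attribute classification metrics for a single attribute.
from sklearn.metrics import precision_recall_fscore_support

true_labels = ["happy", "sad", "neutral", "happy"]
pred_labels = ["happy", "neutral", "neutral", "happy"]

precision, recall, f1, _ = precision_recall_fscore_support(
    true_labels, pred_labels, average="weighted", zero_division=0
)
print(f"P={precision:.3f} R={recall:.3f} F1={f1:.3f}")
```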
|
| 135 |
+
|
| 136 |
+
Moreover, the expressiveness of the speech synthesizer is assessed on the validation and test datasets split from the EXPRESSO dataset. We use the same expressive style classifier employed in affective empathy evaluation, to measure the preservation of emotion in the resynthesized speech. For evaluating the preservation of prosody, we compute the F0 Frame Error (FFE), which measures the percentage of frames with a deviation of more than $20\%$ in pitch value between the input and resynthesized output.
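A rough sketch of the F0 Frame Error computation is given below, assuming librosa's pyin pitch tracker; the paper's exact FFE implementation may differ, for instance in how voicing decisions are handled.

```python
# Sketch of F0 Frame Error (FFE) between reference and resynthesized audio.
import numpy as np
import librosa

def f0_frame_error(ref_wav, syn_wav, sr=16000, rel_tol=0.20):
    f0_ref, _, _ = librosa.pyin(ref_wav, fmin=65, fmax=400, sr=sr)
    f0_syn, _, _ = librosa.pyin(syn_wav, fmin=65, fmax=400, sr=sr)
    n = min(len(f0_ref), len(f0_syn))
    f0_ref, f0_syn = f0_ref[:n], f0_syn[:n]
    voiced = ~np.isnan(f0_ref) & ~np.isnan(f0_syn)
    # A frame counts as an error if its pitch deviates by more than 20% from the
    # reference, or if the two signals disagree on voicing.
    pitch_err = voiced & (np.abs(f0_syn - f0_ref) > rel_tol * f0_ref)
    voicing_err = np.isnan(f0_ref) != np.isnan(f0_syn)
    return float((pitch_err | voicing_err).mean())
```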
|
| 137 |
+
|
| 138 |
+
# 4.3 Result Analysis
|
| 139 |
+
|
| 140 |
+
# 4.3.1 PerceptiveAgent
|
| 141 |
+
|
| 142 |
+
Table 1 presents the overall performance of PerceptiveAgent on cognitive empathy and affective empathy, evaluated on the generated content and audio, respectively. BERTScore measures the semantic similarity between the generated and real response contents, while accuracy assesses the similarity and diversity of emotions between the generated and real speeches. Overall, compared to Speech-GPT3.5, PerceptiveAgent demonstrates a strong ability in generating empathetic responses with a closer alignment to the dialogue context in terms of linguistic content and a higher expressiveness in acoustic information. Specifically, PerceptiveAgent achieves a slightly higher BERTScore than Speech-GPT3.5, primarily because our model can generate content that more accurately captures the speaker's intentions and contains more emotionally intense words. Additionally, PerceptiveAgent notably outperforms Speech-GPT3.5 in terms of accuracy, as the latter doesn't incorporate any emotion prompts during speech generation, thus maintaining a limited variety of prosody. Despite this, the accuracy of PerceptiveAgent still remains at a relatively moderate level. This is because the generated responses, while contextually appropriate, may not entirely align with the real responses in terms of semantics and emotions.
|
| 143 |
+
|
| 144 |
+
# 4.3.2 Speech Captioner
|
| 145 |
+
|
| 146 |
+
Table 2 evaluates the speech captioner's generalization performance on both the validation and test sets. Overall, it is evident that the model achieves the highest F1-score for gender, followed by emotion and pitch. This underscores the model's proficiency in accurately discerning these attributes from input speech. Besides, both gender and emotion exhibit closely aligned precision and recall, affirming the model's predictive prowess for these attributes. Meanwhile, there is a notable gap between precision and recall when predicting energy, indicating variable performance and a tendency towards confident predictions. Conversely, the model's performance in predicting speed is unsatisfactory, which can be attributed to the imbalanced distribution of speed in the training dataset, with over $60\%$ of samples labeled as "neutral".
|
| 147 |
+
|
| 148 |
+
We also discuss how errors in speech processing are affected by the demographics of the speakers.
|
| 149 |
+
|
| 150 |
+
<table><tr><td rowspan="2">Attribute</td><td colspan="3">Validation</td><td colspan="3">Test</td></tr><tr><td>Precision</td><td>Recall</td><td>F1-score</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>Gender</td><td>99.3</td><td>97.5</td><td>98.4</td><td>99.3</td><td>98.6</td><td>99.0</td></tr><tr><td>Emotion</td><td>85.8</td><td>85.4</td><td>85.1</td><td>87.3</td><td>87.1</td><td>86.8</td></tr><tr><td>Pitch</td><td>85.6</td><td>76.8</td><td>80.4</td><td>79.6</td><td>72.1</td><td>75.3</td></tr><tr><td>Energy</td><td>72.4</td><td>57.4</td><td>63.1</td><td>77.7</td><td>65.3</td><td>69.9</td></tr><tr><td>Speed</td><td>47.2</td><td>36.7</td><td>41.3</td><td>48.5</td><td>41.5</td><td>44.7</td></tr></table>
|
| 151 |
+
|
| 152 |
+
Table 2: Performance evaluation of the speech captioner. Precision, recall and F1-score $(\%)$ are utilized to measure its generalization ability on both the validation and test sets. Predicted labels are obtained through semantic classification on the generated captions, while the true labels are derived from the TextrolSpeech dataset.
|
| 153 |
+
|
| 154 |
+
<table><tr><td rowspan="2">Attribute</td><td colspan="3">Male</td><td colspan="3">Female</td></tr><tr><td>Precision</td><td>Recall</td><td>F1-score</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>Emotion</td><td>84.3</td><td>85.4</td><td>84.2</td><td>87.4</td><td>85.5</td><td>86.0</td></tr><tr><td>Pitch</td><td>88.2</td><td>82.8</td><td>85.3</td><td>84.8</td><td>71.0</td><td>75.9</td></tr><tr><td>Energy</td><td>74.4</td><td>60.0</td><td>65.0</td><td>71.2</td><td>54.9</td><td>60.9</td></tr><tr><td>Speed</td><td>46.4</td><td>43.1</td><td>44.6</td><td>48.0</td><td>30.6</td><td>37.3</td></tr></table>
|
| 155 |
+
|
| 156 |
+
Table 3: Comparison of the speech captioner's performance across genders.
|
| 157 |
+
|
| 158 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Accuracy</td><td>FFE</td></tr><tr><td>Exp-R</td><td>Exp-I</td><td>Exp</td></tr><tr><td>GT</td><td>91.9</td><td>75.1</td><td>-</td></tr><tr><td>EXPRESSO</td><td>87.9</td><td>67.0</td><td>0.17±0.12</td></tr><tr><td>MSMA</td><td>83.8</td><td>70.8</td><td>0.39±0.16</td></tr></table>
|
| 159 |
+
|
| 160 |
+
Table 4: Preservation evaluation of the MSMA-Synthesizer. Accuracy $(\%)$ is evaluated on the EXPRESSO read (Exp-R) and conversation (Exp-I) subsets. F0 Frame Error (FFE) is calculated on EXPRESSO (Exp). GT represents the results of automatic metrics calculated on real audio. EXPRESSO and MSMA refer to the synthesizers in EXPRESSO and PerceptiveAgent, respectively.
|
| 161 |
+
|
| 162 |
+
Table 3 compares the performance of the speech captioner across genders, the most prevalent demographic factor. The F1-score on male speech surpasses that on female speech in terms of pitch, energy and speed, despite the comparable sample sizes of the male and female groups (8,634 vs. 8,983). This demonstrates a variation in the model's performance depending on the gender of the speakers.
|
| 163 |
+
|
| 164 |
+
# 4.3.3 MSMA-Synthesizer
|
| 165 |
+
|
| 166 |
+
Table 4 assesses the MSMA-Synthesizer's ability to preserve emotion and prosody features on the test set; the EXPRESSO synthesizer used for comparison is our reproduction. The "GT" method represents the results of automatic metrics calculated on real audio. Clearly, the MSMA-Synthesizer achieves
|
| 167 |
+
|
| 168 |
+
higher accuracy on the conversation dataset (Exp-I) compared to EXPRESSO. This suggests that integrating multiple attributes into speech synthesis more effectively enables the model to synthesize emotionally expressive audio in dialogue scenarios, meeting the requirements of our system. However, there is a decrease in accuracy on the Exp-R (read) dataset, which may be related to the less apparent variation in prosody with emotional transitions in read speech. Additionally, in terms of FFE, it can be observed that incorporating multiple attributes into the MSMA-Synthesizer leads to some degradation in speech synthesis quality. However, this degradation remains within an acceptable range.
|
| 169 |
+
|
| 170 |
+
# 4.3.4 Ablation Study
|
| 171 |
+
|
| 172 |
+
Effectiveness of Captions. The last row of Table 1 demonstrates the effectiveness of captions in PerceptiveAgent. The system without captions synthesizes speech using randomly selected labels for all four speaking attributes (pitch, speed, energy, and emotion), while keeping the same response contents as PerceptiveAgent. It is evident that PerceptiveAgent outperforms the system without captions, highlighting the effectiveness of captions in generating speech with affective empathy.
|
| 173 |
+
|
| 174 |
+
Effectiveness of Style Factors. To discern the discrete impact of distinct speaking style factors, we conduct an ablation experiment by systematically
|
| 175 |
+
|
| 176 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Accuracy</td><td>FFE</td></tr><tr><td>Exp-R</td><td>Exp-I</td><td>Exp</td></tr><tr><td>GT</td><td>91.9</td><td>75.1</td><td>-</td></tr><tr><td>EXPRESSO</td><td>87.9</td><td>67.0</td><td>0.17±0.12</td></tr><tr><td>MSMA</td><td>83.8</td><td>70.8</td><td>0.39±0.16</td></tr><tr><td>-style</td><td>82.2</td><td>69.0</td><td>0.40±0.16</td></tr><tr><td>-speed</td><td>31.8</td><td>9.2</td><td>0.44±0.13</td></tr><tr><td>-energy</td><td>31.0</td><td>9.1</td><td>0.44±0.13</td></tr><tr><td>-gender</td><td>30.8</td><td>8.7</td><td>0.44±0.13</td></tr><tr><td>-pitch</td><td>30.7</td><td>7.4</td><td>0.43±0.13</td></tr></table>
|
| 177 |
+
|
| 178 |
+
Table 5: Performance of the MSMA-Synthesizer conditioned on single speaking style factors.
|
| 179 |
+
|
| 180 |
+
varying each factor while maintaining the others at their default values. Table 5 shows that the model retaining only the style factor achieves the highest accuracy and the lowest FFE, while the models retaining each of the other factors exhibit similar performance. This underscores the predominant contribution of style to the effectiveness of expressive speech synthesis.
|
| 181 |
+
|
| 182 |
+
# 5 Case Study
|
| 183 |
+
|
| 184 |
+
Figure 3 presents two cases comparing the response quality between Speech-GPT3.5 and PerceptiveAgent. It demonstrates that by explicitly incorporating acoustic information through captions, the LLM can more accurately comprehend the speaker's intentions and generate more accurate and contextually appropriate responses. The first and second examples illustrate scenarios where the speaker's intention either contradicts or aligns with the linguistic contents, respectively.
|
| 185 |
+
|
| 186 |
+
The first example in Figure 3 (a) depicts an unplanned meeting conversation between two friends. Judging solely from the textual contents, it would seem that speaker B is extremely excited and delighted about this conversation. However, a closer examination of the key words "lower vocal" and "subdued energy" in speaker B's caption reveals an evasive attitude towards the situation. Consequently, when confronted with speaker A's question, "Were you here waiting for me?", it can be inferred that speaker B is not inclined to engage in extensive conversation. The absence of nuanced captions poses a challenge for Speech-GPT3.5, leading to a misinterpretation and a response that implies a strong desire to continue the conversation. In contrast, PerceptiveAgent provides a response in accordance with the underlying meaning. Therefore, despite potential
|
| 187 |
+
|
| 188 |
+
inconsistencies between linguistic contents and speaker intentions disrupting the accuracy of dialogue context understanding, PerceptiveAgent, with the aid of captions, can effectively capture the speaker's intent by correctly discerning the acoustic information of speech.
|
| 189 |
+
|
| 190 |
+
In the second example in Figure 3 (b), speaker A receives a paper from his mother and intends to share it with his friends. It can be inferred that he is highly excited at the moment, as evidenced by the key words "treble tone" and "energetically" in the caption. Recognizing speaker A's excited mood, the response generated by PerceptiveAgent mirrors the same enthusiasm and curiosity, aligning well with the ground truth. However, Speech-GPT3.5 fails to perceive speaker A's excitement and merely raises a question in a bland manner. Thus, in scenarios where the textual contents coincide with the speaker's intent, our model can also provide responses that correspond to the context of the conversation.
|
| 191 |
+
|
| 192 |
+
# 6 Conclusion
|
| 193 |
+
|
| 194 |
+
In this paper, we propose PerceptiveAgent, an empathetic multi-modal dialogue system capable of accurately discerning the speaker's intentions through the integration of perceptive speech captions and of responding with nuanced and expressive spoken dialogues. Specifically, PerceptiveAgent comprises three cascaded modules: a speech captioner for intention discernment, an LLM for comprehension through sensory integration, and an MSMA-Synthesizer for expressive speech synthesis. Initially, the system employs a perceptive captioner model to capture acoustic features from each speech within dialogues. Subsequently, an LLM module serves as the cognitive core, generating relevant response content with a caption, conditioned on the comprehension of the speaker's intentions. An MSMA-Synthesizer is then developed to synthesize expressive speech. Experimental results indicate PerceptiveAgent's strong ability in empathetic response generation, closely aligning with the dialogue context in terms of linguistic content and exhibiting high expressiveness in acoustic information. Additionally, a case study demonstrates PerceptiveAgent's capability to accurately identify the speaker's intentions in scenarios where the literal interpretation of words is either contrary to or consistent with the speaker's true feelings.
|
| 195 |
+
|
| 196 |
+
# Input
|
| 197 |
+
|
| 198 |
+
Speaker A: Are you being British?!
|
| 199 |
+
|
| 200 |
+
Caption: In a hushed voice, she speaks rapidly. Speaking softly, she maintains a high-pitched voice.
|
| 201 |
+
|
| 202 |
+
Speaker B: No. Not anymore.
|
| 203 |
+
|
| 204 |
+
Caption: Employing a lower vocal quality and customary speaking speed, his expression radiates subdued energy.
|
| 205 |
+
|
| 206 |
+
Speaker A: Why are you shopping here? You don't live in this neighborhood. Were you here waiting for me?
|
| 207 |
+
|
| 208 |
+
Caption: Her subdued energy doesn't affect the high pitch of her voice at a regular speed.
|
| 209 |
+
|
| 210 |
+
# Response
|
| 211 |
+
|
| 212 |
+
Ground Truth
|
| 213 |
+
|
| 214 |
+
Speaker B: Yeah, huh. I'm just uh, you know I'm just picking up some things for a party.
|
| 215 |
+
Caption: The boy's low energy is evident in his fast, low-pitched speech.
|
| 216 |
+
|
| 217 |
+
Speech-GPT3.5
|
| 218 |
+
|
| 219 |
+
Speaker B: Yes, I was actually. I needed to talk to you about something important.
|
| 220 |
+
|
| 221 |
+
PerceptiveAgent
|
| 222 |
+
|
| 223 |
+
Speaker B: I needed to pick up a few things, and I happened to be in the area. That's all.
|
| 224 |
+
|
| 225 |
+
Caption: Speaking with a low energy level, his slow speaking pace conveys a sense of calmness and control.
|
| 226 |
+
|
| 227 |
+
# (a) Contradictory Example
|
| 228 |
+
|
| 229 |
+
# Input
|
| 230 |
+
|
| 231 |
+
Speaker A: Hey guys check it out! My mom sent me the paper!
|
| 232 |
+
|
| 233 |
+
Caption: With a treble tone, she converses naturally and energetically. Her speaking rate is normal, despite her high-pitched voice.
|
| 234 |
+
|
| 235 |
+
# Response
|
| 236 |
+
|
| 237 |
+
Ground Truth
|
| 238 |
+
|
| 239 |
+
Speaker B: Ooh, let's see it!
|
| 240 |
+
|
| 241 |
+
Caption: Speaking with a treble, she maintains a regular conversational pace. Speaking with normal energy, she maintains a treble tone.
|
| 242 |
+
|
| 243 |
+
Speech-GPT3.5
|
| 244 |
+
|
| 245 |
+
Speaker B: What does it say?
|
| 246 |
+
|
| 247 |
+
PerceptiveAgent
|
| 248 |
+
|
| 249 |
+
Speaker B: That's great! Let's take a look at it together.
|
| 250 |
+
|
| 251 |
+
Caption: A speaker with a low tone and normal speed, who is energetically speaking to others and taking interest in the conversation.
|
| 252 |
+
|
| 253 |
+
# (b) Consistent Example
|
| 254 |
+
|
| 255 |
+
Figure 3: Cases comparing the response quality between Speech-GPT3.5 and PerceptiveAgent.
|
| 256 |
+
|
| 257 |
+
# 7 Limitations
|
| 258 |
+
|
| 259 |
+
Although PerceptiveAgent excels at providing empathetic responses in terms of both linguistic and acoustic content, several limitations can be observed in this system: 1) Dataset Limitation: PerceptiveAgent's perception ability is currently constrained by how comprehensively the training dataset describes speech information; at present, it is unable to discern speaker identity and background noise from speech; 2) Time Delay Limitation: PerceptiveAgent is a cascade of three interconnected components, which introduces accumulated delays in response time; and 3) Length Limitation: the maximum token length of LLMs may limit the length of multi-turn dialogues.
|
| 260 |
+
|
| 261 |
+
# Acknowledgements
|
| 262 |
+
|
| 263 |
+
This research was supported by the National Natural Science Foundation of China (Grant No. 62276245).
|
| 264 |
+
|
| 265 |
+
# References
|
| 266 |
+
|
| 267 |
+
Feilong Chen, Minglun Han, Haozhi Zhao, Qingyang Zhang, Jing Shi, Shuang Xu, and Bo Xu. 2023. X-LLM: bootstrapping advanced large language models by treating multi-modalities as foreign languages. CoRR, abs/2305.04160.
|
| 268 |
+
|
| 269 |
+
Jian Cong, Shan Yang, Na Hu, Guangzhi Li, Lei Xie, and Dan Su. 2021. Controllable context-aware conversational speech synthesis. In Interspeech, pages 4658-4662. ISCA.
|
| 270 |
+
|
| 271 |
+
Benjamin MP Cuff, Sarah J Brown, Laura Taylor, and Douglas J Howat. 2016. Empathy: A review of the concept. Emotion review, 8(2):144-153.
|
| 272 |
+
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. 2023. Imagebind: One embedding space to bind them all. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15180-15190. IEEE.
|
| 273 |
+
Haohan Guo, Shaofei Zhang, Frank K. Soong, Lei He, and Lei Xie. 2021. Conversational end-to-end TTS for voice agents. In IEEE Spoken Language Technology Workshop, pages 403-409. IEEE.
|
| 274 |
+
Zhifang Guo, Yichong Leng, Yihan Wu, Sheng Zhao, and Xu Tan. 2023. PromptTTS: Controllable text-to-speech with text descriptions. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.
|
| 275 |
+
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451-3460.
|
| 276 |
+
Rongjie Huang, Mingze Li, Dongchao Yang, Jia-tong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, Yi Ren, Yuexian Zou, Zhou Zhao, and Shinji Watanabe. 2024. Audiogpt: Understanding and generating speech, music, sound, and talking head. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 23802-23804. AAAI Press.
|
| 277 |
+
Keith Ito and Linda Johnson. 2017. The LJ speech dataset. https://keithito.com/LJ-Speech-Dataset/.
|
| 278 |
+
Shengpeng Ji, Jialong Zuo, Minghui Fang, Ziyue Jiang, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. 2023. Textrolspeech: A text style control speech corpus with codec language text-to-speech models. CoRR, abs/2308.14430.
|
| 279 |
+
Eugene Kharitonov, Jade Copet, Kushal Lakhotia, Tu Anh Nguyen, Paden Tomasello, Ann Lee, Ali Elkahky, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux, and Yossi Adi. 2022. Textless-lib: A library for textless spoken language processing. CoRR, abs/2202.07359.
|
| 280 |
+
Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2021. Perspective-taking and pragmatics for generating empathetic responses focused on emotion causes. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2227-2240. Association for Computational Linguistics.
|
| 281 |
+
Minkyu Kim, Kim Sung-Bin, and Tae-Hyun Oh. 2023. Prefix tuning for automated audio captioning.
|
| 282 |
+
|
| 283 |
+
In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.
|
| 284 |
+
Matthew Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, and Wei-Ning Hsu. 2023. Voicebox: Text-guided multilingual universal speech generation at scale. In Advances in Neural Information Processing Systems.
|
| 285 |
+
Yichong Leng, Zhifang Guo, Kai Shen, Xu Tan, Zeqian Ju, Yanqing Liu, Yufei Liu, Dongchao Yang, Leying Zhang, Kaitao Song, Lei He, Xiang-Yang Li, Sheng Zhao, Tao Qin, and Jiang Bian. 2023. PromptTTS 2: Describing and generating voices with text prompt. CoRR, abs/2309.02285.
|
| 286 |
+
Junnan Li, Dongxu Li, Silvio Savarese, and Steven C. H. Hoi. 2023. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 19730-19742. PMLR.
|
| 287 |
+
Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. 2022. BLIP: bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 12888-12900. PMLR.
|
| 288 |
+
Chaohu Liu, Kun Yin, Haoyu Cao, Xinghua Jiang, Xin Li, Yinsong Liu, Deqiang Jiang, Xing Sun, and Linli Xu. 2024. HRVDA: high-resolution visual document assistant. CoRR, abs/2404.06918.
|
| 289 |
+
Kentaro Mitsui, Yukiya Hono, and Kei Sawada. 2023. Towards human-like spoken dialogue generation between AI agents from written dialogue. CoRR, abs/2310.01088.
|
| 290 |
+
Michael Negnevitsky. 2005. Artificial intelligence: a guide to intelligent systems. Pearson education.
|
| 291 |
+
Tu Anh Nguyen, Wei-Ning Hsu, Antony D'Avirro, Bowen Shi, Itai Gat, Maryam Fazel-Zarandi, Tal Remez, Jade Copet, Gabriel Synnaeve, Michael Hassid, Felix Kreuk, Yossi Adi, and Emmanuel Dupoux. 2023. EXPRESSO: A benchmark and analysis of discrete expressive speech resynthesis. CoRR, abs/2308.05725.
|
| 292 |
+
Tu Anh Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Elkahky, Paden Tomasello, Robin Algayres, Benoit Sagot, Abdelrahman Mohamed, and Emmanuel Dupoux. 2022. Generative spoken dialogue language modeling. CoRR, abs/2203.16502.
|
| 293 |
+
Yuto Nishimura, Yuki Saito, Shinnosuke Takamichi, Kentaro Tachibana, and Hiroshi Saruwatari. 2022. Acoustic modeling for end-to-end empathetic dialogue speech synthesis using linguistic and prosodic
|
| 294 |
+
|
| 295 |
+
contexts of dialogue history. In Interspeech, pages 3373-3377. ISCA.
|
| 296 |
+
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. CoRR, abs/2304.03277.
|
| 297 |
+
Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled self-supervised representations. In *Interspeech*, pages 3615-3619. ISCA.
|
| 298 |
+
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 527-536. Association for Computational Linguistics.
|
| 299 |
+
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
|
| 300 |
+
Harry T Reis, Michael R Maniaci, Peter A Caprariello, Paul W Eastwick, and Eli J Finkel. 2011. Familiarity does indeed promote attraction in live interaction. Journal of personality and social psychology, 101(3):557.
|
| 301 |
+
Stuart J Russell and Peter Norvig. 2010. Artificial intelligence a modern approach. London.
|
| 302 |
+
Sahand Sabour, Chujie Zheng, and Minlie Huang. 2022. CEM: commonsense-aware empathetic response generation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 11229-11237. AAAI Press.
|
| 303 |
+
Yuki Saito, Shinnosuke Takamichi, Eiji Iimori, Kentaro Tachibana, and Hiroshi Saruwatari. 2023. ChatGPT-EDSS: Empathetic dialogue speech synthesis trained from chatgpt-derived context word embeddings. CoRR, abs/2305.13724.
|
| 304 |
+
Timothy Schaumlöffel, Martina G Vilas, and Gemma Roig. 2023. Peacs: Prefix encoding for auditory caption synthesis. In Proceedings of the Detection and Classification of Acoustic Scenes and Events Challenge, pages 1-3.
|
| 305 |
+
Murray Shanahan. 2024. Talking about large language models. Commun. ACM, 67(2):68-79.
|
| 306 |
+
Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, and Jiang Bian. 2023. Naturalspeech 2: Latent diffusion models are natural and zero-shot speech and singing synthesizers. CoRR, abs/2304.09116.
|
| 307 |
+
Reo Shimizu, Ryuichi Yamamoto, Masaya Kawamura, Yuma Shirahata, Hironori Doi, Tatsuya Komatsu, and Kentaro Tachibana. 2023. PromptTTS++: Controlling speaker identity in prompt-based text-to-speech using natural language descriptions. CoRR, abs/2309.08140.
|
| 308 |
+
|
| 309 |
+
Adam Smith. 2006. Cognitive empathy and emotional empathy in human behavior and evolution. *The Psychological Record*, 56(1):3-21.
|
| 310 |
+
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. CoRR, abs/2211.09085.
|
| 311 |
+
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinez, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. CoRR, abs/2307.09288.
|
| 312 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
|
| 313 |
+
Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Neural codec language models are zero-shot text to speech synthesizers. CoRR, abs/2301.02111.
|
| 314 |
+
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, pages 24824-24837.
|
| 315 |
+
Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, and TatSeng Chua. 2023. Next-gpt: Any-to-any multimodal LLM. CoRR, abs/2309.05519.
|
| 316 |
+
Yihan Wu, Xu Tan, Bohan Li, Lei He, Sheng Zhao, Ruihua Song, Tao Qin, and Tie-Yan Liu. 2022. Adaspeech 4: Adaptive text to speech in zero-shot scenarios. In *Interspeech*, pages 2568-2572. ISCA.
|
| 317 |
+
Yaoxun Xu, Hangting Chen, Jianwei Yu, Qiaochu Huang, Zhiyong Wu, Shi-Xiong Zhang, Guangzhi
|
| 318 |
+
|
| 319 |
+
Li, Yi Luo, and Rongzhi Gu. 2024. Secap: Speech emotion captioning with large language model. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19323-19331. AAAI Press.
|
| 320 |
+
Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. 2019. CSTR VCTK Corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92). https://doi.org/10.7488/ds/2645.
|
| 321 |
+
Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. 2023. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15757-15773. Association for Computational Linguistics.
|
| 322 |
+
Fang Zhang, Yongxin Zhu, Xiangxiang Wang, Huang Chen, Xing Sun, and Linli Xu. 2024. Visual hallucination elevates speech recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19542-19550. AAAI Press.
|
| 323 |
+
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In International Conference on Learning Representations. OpenReview.net.
|
| 324 |
+
Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, and Bingyi Kang. 2023. Bubogpt: Enabling visual grounding in multi-modal llms. CoRR, abs/2307.08581.
|
| 325 |
+
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. CoRR, abs/2304.10592.
|
| 326 |
+
|
| 327 |
+
# A Prompt for Dialogue Generation with Captions
|
| 328 |
+
|
| 329 |
+
You are the last speaker in the following daily dialogue.
|
| 330 |
+
|
| 331 |
+
Each speaker is provided with a speaking style caption after the conversation, which contains gender, speaking speed, pitch, energy and emotion. You MUST give a response FIRST depending on the dialogue history, followed by a speaking style caption.
|
| 332 |
+
|
| 333 |
+
NOTE: It is important to recognize the speaker's intention from the speaking style before generating response. You MUST keep response as short as possible.
|
| 334 |
+
|
| 335 |
+
Here is an example:
|
| 336 |
+
|
| 337 |
+
Input:
|
| 338 |
+
|
| 339 |
+
Speaker A: My specimen is deposited into the container in the room. Janice! You're not... gone?
|
| 340 |
+
|
| 341 |
+
Speaking in a high pitch, the male speaker conveyed a touch of low energy during normal-paced conversation.
|
| 342 |
+
|
| 343 |
+
Speaker B: Oh! Sid is still in his room. So did you do it? Did you make your deposit?
|
| 344 |
+
|
| 345 |
+
Speaking swiftly, her voice remains soft and steady. With little energy, her voice reaches a high pitch.
|
| 346 |
+
|
| 347 |
+
Speaker A: Yeah! yeah... The hard part is over!
|
| 348 |
+
|
| 349 |
+
The man's words are barely audible, but he keeps up the standard pace.
|
| 350 |
+
|
| 351 |
+
Speaker B:
|
| 352 |
+
|
| 353 |
+
Output:
|
| 354 |
+
|
| 355 |
+
That's not the hard part honey! The hard part is what comes next, I mean aren't you worried about the results?
|
| 356 |
+
|
| 357 |
+
A woman speaks quickly and her high-pitched tone is a trademark of her dazed.
|
| 358 |
+
|
| 359 |
+
Input:
|
| 360 |
+
|
| 361 |
+
{dialogue history with caption}
|
| 362 |
+
|
| 363 |
+
Output:
|
| 364 |
+
|
| 365 |
+
{response content}
|
| 366 |
+
|
| 367 |
+
{response caption}
|
| 368 |
+
|
| 369 |
+
# B Prompt for Dialogue Generation without Captions
|
| 370 |
+
|
| 371 |
+
You are the last speaker in the following daily dialogue.
|
| 372 |
+
|
| 373 |
+
You MUST give a response depending on the dialogue history. You MUST keep response as short as possible.
|
| 374 |
+
|
| 375 |
+
Here is an example:
|
| 376 |
+
|
| 377 |
+
Input:
|
| 378 |
+
|
| 379 |
+
Speaker A: My specimen is deposited into the container in the room. Janice! You're not... gone?
|
| 380 |
+
|
| 381 |
+
Speaker B: Oh! Sid is still in his room. So did you do it? Did you make your deposit?
|
| 382 |
+
|
| 383 |
+
Speaker A: Yeah! yeah... The hard part is over!
|
| 384 |
+
|
| 385 |
+
Speaker B:
|
| 386 |
+
|
| 387 |
+
Output:
|
| 388 |
+
|
| 389 |
+
That's not the hard part honey! The hard part is what comes next, I mean aren't you worried about the results?
|
| 390 |
+
|
| 391 |
+
Input:
|
| 392 |
+
|
| 393 |
+
{dialogue history without caption}
|
| 394 |
+
|
| 395 |
+
Output:
|
| 396 |
+
|
| 397 |
+
{response content}
|
2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c57069d08feb7e9619a6808839985ce4fbd646f9265b74349bc2f75f2d12c009
|
| 3 |
+
size 245841
|
2024/Talk With Human-like Agents_ Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/26c880de-fdc1-4e48-8796-76b92fa174c4_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/26c880de-fdc1-4e48-8796-76b92fa174c4_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/26c880de-fdc1-4e48-8796-76b92fa174c4_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:515dca55199424841212b9ca429ea5b4e9550efc90eee1d0306d2a8653eb3255
|
| 3 |
+
size 1672434
|
2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/full.md
ADDED
|
@@ -0,0 +1,391 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# TASTE: Teaching Large Language Models to Translate through Self-Reflection
|
| 2 |
+
|
| 3 |
+
Yutong Wang $^{1*}$ Jiali Zeng $^{2}$ Xuebo Liu $^{1\dagger}$ Fandong Meng $^{2}$ Jie Zhou $^{2}$ Min Zhang $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
|
| 6 |
+
|
| 7 |
+
$^{2}$ Pattern Recognition Center, WeChat AI, Tencent Inc, China
|
| 8 |
+
|
| 9 |
+
wangyutong@stu.hit.edu.cn, {liuxuebo,zhangmin2021}@hit.edu.cn
|
| 10 |
+
|
| 11 |
+
{lemonzeng,fandongmeng,withtomzhou}@tencent.com
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks. Techniques like instruction tuning have effectively enhanced the proficiency of LLMs in the downstream task of machine translation. However, the existing approaches fail to yield satisfactory translation outputs that match the quality of supervised neural machine translation (NMT) systems. One plausible explanation for this discrepancy is that the straightforward prompts employed in these methodologies are unable to fully exploit the acquired instruction-following capabilities. To this end, we propose the TASTE framework, which stands for translating through self-reflection. The self-reflection process includes two stages of inference. In the first stage, LLMs are instructed to generate preliminary translations and conduct self-assessments on these translations simultaneously. In the second stage, LLMs are tasked to refine these preliminary translations according to the evaluation results. The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods. Our work presents a promising approach to unleash the potential of LLMs and enhance their capabilities in MT. The codes and datasets are open-sourced at https://github.com/YutongWang1216/ReflectionLLMMT.
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
Large language models (LLMs) like GPT-4 (OpenAI, 2023) have recently demonstrated dramatic performance across a wide range of natural language processing tasks (Bubeck et al., 2023; Liang et al., 2022). Their outstanding grasp of syntactic and semantic knowledge positions them as powerful instruments for the enhancement of machine
|
| 20 |
+
|
| 21 |
+
* Work was done when Yutong Wang was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China.
|
| 22 |
+
|
| 23 |
+
Xuebo Liu is the corresponding author.
|
| 24 |
+
|
| 25 |
+
<table><tr><td>SRC</td><td>但/But 口水鸡/Poached Chicken in Chili Sauce 整盒/whole box 是/is 生肉/raw meat, 没办法/cannot 吃/eat, 是否/whether 可以/can 退款/refund?</td></tr><tr><td>REF</td><td>But the Poached Chicken in Chili Sauce is just a box of inedible raw chicken, and can I get a refund for it?</td></tr><tr><td>Normal</td><td>But the whole box of chicken is raw meat, so you can’t eat it, can you refund?</td></tr><tr><td>Stage 1</td><td>But the whole box is meat, can’t eat, can you refund \n [Bad]</td></tr><tr><td>Stage 2</td><td>But the whole box of spicy chicken is raw meat, so I can’t eat it. Can I get a refund?</td></tr></table>
|
| 26 |
+
|
| 27 |
+
Table 1: An example of the TASTE approach. "Normal" denotes the output of the baseline LLM fine-tuned on a normal parallel corpus. "Stage 1" and "Stage 2" denote the outputs of the first and second inference stages of the proposed self-reflection process, respectively. The highlight denotes the quality label predicted by the LLM. Inherent translation errors generated in the first stage, such as the red strikethrough part, are rectified in the second inference stage.
|
| 28 |
+
|
| 29 |
+
translation, capable of producing translations of superior quality (Hendy et al., 2023; Zhang et al., 2023a; Garcia and Firat, 2022). This substantial progress represents an evolution of the paradigm in machine translation, serving as the foundation of novel translation systems characterized by enhanced quality and reliability.
|
| 30 |
+
|
| 31 |
+
Numerous studies are underway to unlock the vast potential of machine translation within LLMs. Prompt engineering aims to design effective prompt templates to guide LLMs in accomplishing specific language tasks. Some approaches attempt to integrate additional information relevant to the translation task to enhance the performance of LLMs (Ghazvininejad et al., 2023; Lu et al., 2023; He et al., 2024; Peng et al., 2023). Studies in In-Context Learning (ICL, Brown et al., 2020) seek to provide LLMs with more relevant and high-quality translation exemplars, which assists LLMs
|
| 32 |
+
|
| 33 |
+
in retrieving bilingual knowledge, facilitating the generation of translations of the highest possible quality (Vilar et al., 2023; Agrawal et al., 2023). However, assessments of LLMs reveal that, in most translation directions, their performance falls short of that exhibited by robust supervised baselines (Zhu et al., 2023). This shortfall is due to the fact that these approaches often treat the LLM machine translation task as a simple text generation task, focusing on adjusting the prompts to enhance the outcomes. However, the intrinsic features of the machine translation task, such as the need for diverse multilingual knowledge, are often overlooked.
|
| 34 |
+
|
| 35 |
+
Some studies recommend the tuning of relatively smaller LLMs for translation (Zhu et al., 2023; Xu et al., 2023). Instruction tuning of LLMs with a limited number of high-quality supervised instructions in machine translation tasks yields remarkable results in some instances (Zeng et al., 2023; Jiao et al., 2023; Zhu et al., 2023; Hendy et al., 2023). Despite these achievements, these attempts still fail to fully leverage the capacity of LLMs due to their overly straightforward inference process. Unlike supervised NMT models, LLMs generate translations through language modeling, which contains a more complicated inference process and relies more on inherent linguistic knowledge. Studies such as Chain-of-Thought (CoT) reveal that the introduction of intermediate reasoning steps in the inference process significantly increases the reasoning capabilities of language models (Wei et al., 2022b; Kojima et al., 2022).
|
| 36 |
+
|
| 37 |
+
In this paper, we introduce TASTE, a method that aims at improving the translation performance of LLMs by instilling the ability to self-reflect on their own outputs. Specifically, we segment the LLM translation process into two stages of inference. In the first stage, LLMs are prompted to generate preliminary translations while simultaneously making quality predictions for these translations. In the second stage, we instruct LLMs to refine these preliminary translations based on the predicted quality levels to produce final candidates. An example of the proposed process can be found in Table 1. This entire process can be regarded as a form of self-reflection, mirroring the common approach employed by humans to carry out tasks more effectively and impeccably. To establish a sufficient multitask capability for executing the entire reflective translation process, we conduct supervised fine-tuning (SFT) on LLMs using a multitask training dataset. This method demonstrates a
|
| 38 |
+
|
| 39 |
+
remarkable stimulation of the potential of LLMs, providing a novel approach to enhance the translation performance of these models.
|
| 40 |
+
|
| 41 |
+
Our contributions are summarized as follows:
|
| 42 |
+
|
| 43 |
+
- We present the TASTE method, which guides LLMs through a two-stage inference process, allowing them to initially generate preliminary results and subsequently refine them into improved candidates based on their self-assessment results.
|
| 44 |
+
- We create a multi-task training set comprising tasks that are closely aligned with the TASTE process to equip LLMs with the capability to execute the whole inference process.
|
| 45 |
+
- We find that by employing the TASTE method, LLMs proficiently refine their initial translation candidates, resulting in superior final outcomes, which in turn contributes to an enhancement in their translation capabilities.
|
| 46 |
+
|
| 47 |
+
# 2 Related Work
|
| 48 |
+
|
| 49 |
+
Efforts to enhance the translation performance of LLMs can be categorized into two research lines: prompt engineering and instruction tuning. Prompt Engineering aims to design proper prompt templates and introduce prior knowledge or supplementary information to support the inference process. Dictionary-based approaches incorporate control hints in the prompt from bilingual or multilingual dictionaries to deal with rare words in source sentences (Ghazvininejad et al., 2023; Lu et al., 2023). He et al. (2024) extracts translation-related knowledge, such as topics, by self-prompting to guide the translation process. Studies in ICL (Brown et al., 2020) aim to provide LLMs with more relevant and high-quality translation exemplars. This approach assists LLMs in retrieving bilingual knowledge, facilitating the generation of translations of the highest possible quality (Vilar et al., 2023; Agrawal et al., 2023).
|
| 50 |
+
|
| 51 |
+
Instruction tuning represents an efficient method to enhance the ability of LLMs to follow natural language instructions and yield outputs that align more closely with human preference in downstream zero-shot tasks (Wei et al., 2022a; Ouyang et al., 2022; Chung et al., 2024). Jiao et al. (2023) explore several translation instructions to improve the translation performance of LLMs. Zeng et al. (2023) employ examples in comparison to instruct
|
| 52 |
+
|
| 53 |
+

|
| 54 |
+
Figure 1: The framework of our proposed TASTE method.
|
| 55 |
+
|
| 56 |
+
LLMs and calculate the additional loss. Zhang et al. (2023b) enhance the multilingual language generation and instruction following capabilities of LLMs through interactive translation tasks.
|
| 57 |
+
|
| 58 |
+
Additionally, several studies proposed to facilitate a similar reflection process, utilizing confidence-guided approaches or multi-step inference, to assist the translation procedure. Lu et al. (2022) train a confidence estimation network in parallel with the backbone network to predict the confidence levels for generated translations, determining the amount of hints the model requires to produce correct translations. Xia et al. (2017) introduce a second-pass decoder to the conventional encoder-decoder structure, polishing the initial drafts and generating the final outputs. Tan et al. (2022) divide the translation process into three stages and independently apply different continuous prompts to better shift language to translation tasks. Li et al. (2023) propose a deliberate-then-generate inference framework, where LLMs are first prompted to detect error types from given candidates and then generate their final answers. Chen et al. (2023) propose to iteratively prompt LLMs to self-correct their translations. Feng et al. (2024) introduce a self-correcting inference framework for LLMs accessible via APIs, where LLMs autonomously conduct MQM self-evaluations and refine the primary candidates based on the evaluation results. Ki
|
| 59 |
+
|
| 60 |
+
and Carpuat (2024) utilize a trained fine-grained feedback model to identify defects in generated translations, subsequently directing LLMs to refine the translations based on the feedback.
|
| 61 |
+
|
| 62 |
+
Our work represents a fusion of instruction tuning and the CoT methodology. We introduce a multi-step inference translation process in imitation of the self-reflection mechanism observed in humans. The utilization of multitask training data, including Basic Translation, Quality Prediction, and Draft Refinement, substantiates not only the multi-step inference capability but also the comprehension of nuances in translation quality.

# 3 TASTE: Translate through Reflection

# 3.1 Overall Framework

In this work, we aim to enhance the translation capabilities of LLMs by instructing them to engage in self-reflection on their translation candidates, ultimately producing carefully refined outputs. This process is achieved through a two-stage inference procedure.
In the first stage, we ask the models to generate preliminary translations. Different from the conventional machine translation process, we also require them to predict the quality of their own outputs simultaneously. These preliminary translations are named "drafts", and their corresponding quality predictions can take the form of either approximate labels or precise scores. This stage of inference can
be formalized into the following formula:

$$
(\boldsymbol{y}, q) \sim P(\boldsymbol{y}, q \mid \boldsymbol{w}, \boldsymbol{x}; \theta) \tag{1}
$$

$$
\begin{array}{l}
P(\boldsymbol{y}_{1:m}, q \mid \boldsymbol{w}, \boldsymbol{x}; \theta) \\
= P(q \mid \boldsymbol{y}_{1:m}, \boldsymbol{w}, \boldsymbol{x}; \theta)\, P(\boldsymbol{y}_{1:m} \mid \boldsymbol{w}, \boldsymbol{x}; \theta) \\
= P(q \mid \boldsymbol{y}_{1:m}, \boldsymbol{w}, \boldsymbol{x}; \theta) \prod_{t=1}^{m} P(\boldsymbol{y}_{t} \mid \boldsymbol{y}_{1:t-1}, \boldsymbol{w}, \boldsymbol{x}; \theta) \tag{2}
\end{array}
$$

where $\theta$ represents the parameters of the LLM, $x$ and $w$ denote the source sentence and the rest of the prompt (including the instruction), respectively. The preliminary translation $y_{1:m}$ is generated first, and the quality label (score) $q$ is generated later according to $y_{1:m}$ . The corresponding prompts of the first inference stage are illustrated in the "Inference Stage 1" box in Figure 1.
In the second stage, we guide the models to refine their drafts based on the quality predictions. Both the drafts and quality labels/scores are formatted into the input field of the prompts for LLMs. The models proceed to make appropriate adjustments to the drafts according to the predicted label/scores, yielding the final translation candidates in a refined form. This stage of inference can be formalized into the following formula:

$$
\boldsymbol{y}^{\prime} \sim P(\boldsymbol{y}^{\prime} \mid \boldsymbol{y}, q, \boldsymbol{w}^{\prime}, \boldsymbol{x}; \theta) \tag{3}
$$

$$
\begin{array}{l}
P(\boldsymbol{y}^{\prime}_{1:n} \mid \boldsymbol{y}, q, \boldsymbol{w}^{\prime}, \boldsymbol{x}; \theta) \\
= \prod_{t=1}^{n} P(\boldsymbol{y}^{\prime}_{t} \mid \boldsymbol{y}^{\prime}_{1:t-1}, \boldsymbol{y}, q, \boldsymbol{w}^{\prime}, \boldsymbol{x}; \theta) \tag{4}
\end{array}
$$

where $\pmb{w}^{\prime}$ denotes the new prompt employed in the second stage. The refined translation $\pmb{y}_{1:n}^{\prime}$ is generated according to the preliminary translation $\pmb{y}$ with its predicted quality level $q$ . The corresponding prompts of the second inference stage are shown in the "Inference Stage 2" box in Figure 1.
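
The two-stage inference can be sketched as follows. The prompt wording mirrors the templates in Table 14 (TC form); the checkpoint path, decoding settings, and helper functions are illustrative placeholders rather than the released implementation.

```python
import re
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "path/to/taste-finetuned-llama-2-7b"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

HEADER = "Write a response that appropriately completes the request.\n\n### Request:\n"

def generate(prompt, max_new_tokens=256):
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

def taste_translate(src, src_lang="Chinese", tgt_lang="English"):
    # Inference Stage 1: draft translation plus a self-predicted quality label (TC form).
    stage1 = generate(
        HEADER
        + f"Translate from {src_lang} to {tgt_lang}, and label the translation quality "
        + 'as "Good", "Medium" or "Bad".\n'
        + src + "\n\n### Response:"
    )
    match = re.search(r"\[(Good|Medium|Bad)\]", stage1)
    label = match.group(1) if match else "Medium"
    draft = stage1[: match.start()].strip() if match else stage1.strip()
    # Inference Stage 2: refine the draft conditioned on its predicted quality label.
    stage2 = generate(
        HEADER
        + f"Translate from {src_lang} to {tgt_lang}.\n" + src + "\n\n"
        + f"### Hint:\nDraft with quality label:\n[{label}] {draft}\n\n"
        + "### Note: A translation with no errors could be\n\n### Response:"
    )
    return stage2.strip()
```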

# 3.2 Multitask SFT

To ensure that LLMs achieve a comprehensive understanding of the task instructions, we conduct multitask SFT on the models. The multitasking approach consists of three components: Quality Prediction, Basic Translation, and Draft Refinement.
Quality Prediction In this sub-task, LLMs are tasked with generating translations and providing self-quality predictions for a given source sentence. The quality prediction task consists of two forms:
a) Text Classification (TC), entailing label predictions of "Good", "Medium", or "Bad", and b) Quality Estimation (QE), involving integer score prediction ranging from 0 to 100. We utilize candidates of various qualities generated by multiple systems, along with their evaluated COMET scores, to construct fine-tuning instances. Please refer to Appendix A.1 for detailed information. The ground truth for these training instances consists of the candidate translations with their gold quality labels/scores appended at the end.
Basic Translation We utilize parallel data combined with a standardized instruction to conduct fine-tuning of LLMs for multilingual translation tasks, including German $\Leftrightarrow$ English and Chinese $\Leftrightarrow$ English language pairs. The instruction is formulated straightforwardly as "Translate from [SRC] to [TGT]". As shown in Figure 1, the Basic Translation instructions exhibit a high degree of similarity to their Quality Prediction counterparts, but they belong to two completely different tasks. To disambiguate instructions between these two tasks and prevent LLMs from obtaining low-quality translation knowledge, we follow Zeng et al. (2023) to append a distinguishing note "### Note: A translation with no errors could be" at the end of the Basic Translation input.
Draft Refinement In this sub-task, LLMs are asked to refine drafts based on quality labels/scores to produce final outputs. Given a source sentence and multiple candidates of various qualities, we designate the highest-scored output as the reference. The drafts are sampled from the remaining candidates, covering all quality levels. We incorporate a new field named "Hint" within the translation prompt. This field provides LLMs with translation drafts of the source sentence, with quality labels/scores placed in front of the drafts in the following format: "#Hint: Draft with quality label/score: [LABEL/SCORE] [Draft]". We fill in "label" or "score" based on whether the TC or QE approach is employed. Examples of the complete prompts are shown in Table 14.
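
A rough sketch of how the three kinds of fine-tuning instances could be assembled for one source sentence is given below. The `Instance` container, the field layout, and the use of the best-labeled candidate as the refinement target are simplifications for illustration; the actual data construction (COMET-based selection and sampling proportions) is described in Appendix A.2.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    prompt: str
    target: str

HEADER = "Write a response that appropriately completes the request.\n\n### Request:\n"
RANK = {"Bad": 0, "Medium": 1, "Good": 2}

def build_instances(src, reference, candidates, src_lang="Chinese", tgt_lang="English"):
    """candidates: list of (translation, quality_label) pairs with labels in {Good, Medium, Bad}."""
    instances = []
    # Basic Translation: parallel data plus the distinguishing note.
    instances.append(Instance(
        prompt=HEADER + f"Translate from {src_lang} to {tgt_lang}.\n{src}\n\n"
               "### Note: A translation with no errors could be\n\n### Response:",
        target=reference,
    ))
    best_cand, _ = max(candidates, key=lambda c: RANK[c[1]])
    for cand, label in candidates:
        # Quality Prediction (TC form): the target is the candidate followed by its gold label.
        instances.append(Instance(
            prompt=HEADER + f"Translate from {src_lang} to {tgt_lang}, and label the translation "
                   'quality as "Good", "Medium" or "Bad".\n' + f"{src}\n\n### Response:",
            target=f"{cand}\n[{label}]",
        ))
        # Draft Refinement: the draft and its label go into the Hint field; the target is the
        # best available candidate (the paper uses the highest-COMET candidate as reference).
        if cand != best_cand:
            instances.append(Instance(
                prompt=HEADER + f"Translate from {src_lang} to {tgt_lang}.\n{src}\n\n"
                       f"### Hint:\nDraft with quality label:\n[{label}] {cand}\n\n"
                       "### Note: A translation with no errors could be\n\n### Response:",
                target=best_cand,
            ))
    return instances
```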

# 4 Experimental Setups

# 4.1 Data

We employ the WMT validation set to construct the training data for the Basic Translation task and utilize the MTME multi-candidate $^{1}$ dataset, which
<table><tr><td rowspan="2">System</td><td colspan="2">Zh→En</td><td colspan="2">En→Zh</td><td colspan="2">De→En</td><td colspan="2">En→De</td><td colspan="2">Average</td></tr><tr><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td></tr><tr><td>WMT22 Winners</td><td>81.00</td><td>33.50</td><td>86.80</td><td>54.30</td><td>85.00</td><td>33.70</td><td>87.40</td><td>38.40</td><td>85.05</td><td>39.98</td></tr><tr><td>NLLB-3.3b</td><td>76.92</td><td>21.07</td><td>81.56</td><td>32.52</td><td>83.42</td><td>29.54</td><td>86.23</td><td>33.98</td><td>82.03</td><td>29.28</td></tr><tr><td colspan="11">Backbone: LLaMA</td></tr><tr><td>ParroT</td><td>75.90</td><td>20.20</td><td>80.30</td><td>30.30</td><td>82.40</td><td>27.30</td><td>81.60</td><td>26.10</td><td>80.05</td><td>25.98</td></tr><tr><td>Bayling</td><td>77.48</td><td>20.31</td><td>84.43</td><td>38.19</td><td>83.19</td><td>28.16</td><td>82.18</td><td>25.66</td><td>81.82</td><td>28.08</td></tr><tr><td>MT-Full</td><td>78.72</td><td>23.80</td><td>83.35</td><td>33.01</td><td>83.79</td><td>30.10</td><td>83.70</td><td>27.18</td><td>82.39</td><td>28.52</td></tr><tr><td>MT-FixEmb</td><td>79.02</td><td>24.30</td><td>83.62</td><td>33.33</td><td>84.05</td><td>30.62</td><td>83.66</td><td>27.75</td><td>82.59</td><td>29.00</td></tr><tr><td colspan="11">TASTE</td></tr><tr><td>Full-QE</td><td>79.17</td><td>24.27</td><td>83.90</td><td>34.25</td><td>83.83</td><td>30.49</td><td>83.38</td><td>27.16</td><td>82.57</td><td>29.04</td></tr><tr><td>Full-TC</td><td>79.31</td><td>24.23</td><td>84.00</td><td>34.51</td><td>83.92</td><td>30.17</td><td>82.95</td><td>26.74</td><td>82.55</td><td>28.91</td></tr><tr><td>FixEmb-QE</td><td>79.35</td><td>24.47</td><td>84.30</td><td>34.94</td><td>84.07</td><td>30.75</td><td>83.70</td><td>27.32</td><td>82.86</td><td>29.37</td></tr><tr><td>FixEmb-TC</td><td>79.53</td><td>24.87</td><td>84.24</td><td>34.96</td><td>84.11</td><td>31.03</td><td>83.80</td><td>27.94</td><td>82.92</td><td>29.70</td></tr><tr><td colspan="11">Backbone: BLOOM</td></tr><tr><td>ParroT</td><td>79.00</td><td>22.70</td><td>83.50</td><td>34.50</td><td>78.00</td><td>24.90</td><td>73.60</td><td>20.50</td><td>78.53</td><td>25.65</td></tr><tr><td>TIM</td><td>79.71</td><td>24.51</td><td>85.10</td><td>37.83</td><td>78.94</td><td>26.12</td><td>74.91</td><td>20.90</td><td>79.67</td><td>27.34</td></tr><tr><td>MT-Full</td><td>79.25</td><td>22.81</td><td>85.01</td><td>35.49</td><td>77.61</td><td>24.05</td><td>71.31</td><td>18.84</td><td>78.30</td><td>25.30</td></tr><tr><td>MT-FixEmb</td><td>79.84</td><td>23.43</td><td>85.20</td><td>36.68</td><td>78.27</td><td>25.07</td><td>72.06</td><td>19.41</td><td>78.84</td><td>26.15</td></tr><tr><td colspan="11">TASTE</td></tr><tr><td>Full-QE</td><td>79.36</td><td>23.15</td><td>85.05</td><td>36.84</td><td>78.42</td><td>24.87</td><td>75.41</td><td>21.18</td><td>79.56</td><td>26.51</td></tr><tr><td>Full-TC</td><td>79.14</td><td>23.04</td><td>84.94</td><td>36.75</td><td>78.74</td><td>24.97</td><td>75.53</td><td>21.13</td><td>79.59</td><td>26.47</td></tr><tr><td>FixEmb-QE</td><td>80.40</td><td>24.41</td><td>85.81</td><td>39.31</td><td>79.20</td><td>26.28</td><td>76.30</td><td>21.84</td><td>80.43</td><td>27.96</td></tr><tr><td>FixEmb-TC</td><td>80.28</td><td>24.20</td><td>85.90</td><td>39.07</td><td>78.96</td><td>26.27</td><td>76.38</td><td>21.98</td><td>80.38</td><td>27.88</td></tr></table>
Table 2: Main results of TASTE. LLaMA-2-7b and BLOOMZ-7b1-mt are chosen as the backbone models. QE and TC signify that the Quality Prediction subtask takes the form of quality estimation and text classification, respectively. The best results of each kind of backbone model are labeled using bold font.
contains source sentences and their candidate translations generated by multiple systems to build the training data for the Quality Prediction and Draft Refinement tasks. For Quality Prediction, candidates across various quality levels are sampled to form training instances. For Draft Refinements, the candidate with the highest COMET score is chosen as the reference, and the drafts to be refined are sampled from the other candidates covering various qualities. The data statistics and details of data building can be found in Appendix A.2.
To avoid possible data leakage in the training data, we evaluate the translation performance on the WMT22 test set (Kocmi et al., 2022), which covers domains such as news, social, e-commerce, and conversation. We present the translation results in German $\Leftrightarrow$ English and Chinese $\Leftrightarrow$ English directions. We report the BLEU scores by SacreBLEU (Post, 2018) and COMET scores by wmt22-comet-da (Rei et al., 2022).
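
A minimal evaluation snippet consistent with this setup is shown below, assuming the `sacrebleu` and `unbabel-comet` packages are available; the batch size and GPU settings are arbitrary choices.

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

def evaluate(sources, hypotheses, references):
    # Corpus-level BLEU via SacreBLEU (Post, 2018).
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    # Reference-based COMET with wmt22-comet-da (Rei et al., 2022).
    comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
    data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)]
    comet = comet_model.predict(data, batch_size=32, gpus=1)
    return bleu.score, comet.system_score
```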

# 4.2 Model Training

We employ BLOOMZ-7b1-mt² and LLaMA-2-7b³ (Touvron et al., 2023) as our backbone models.
These models are all fine-tuned for 1 epoch with a batch size of 128. The learning rate is set to 2e-5, and the weight decay parameter is set to 0.0. The maximum text length is 768. We conducted the fine-tuning on eight NVIDIA A100 GPUs, using DeepSpeed ZeRO stage 3 for acceleration.
We employ two distinct training strategies, differing in the updated parameters:
Full-Parameter Tuning (Full) In this method, all the parameters in LLMs are involved in the training process. In comparison to methods that focus on training only a small set of parameters (such as Prefix Tuning and Low-Rank Adaption), full-parameter tuning is less susceptible to overfitting due to the larger parameter space. However, the main issue with this approach is excessive memory consumption and runtime demands.
Tuning with Fixed Embedding Layer (FixEmb) The embedding layer is pre-trained on large-scale corpus and reflects the general distribution of word embeddings. Further tuning, especially when the number of trainable parameters is limited or the training corpus is not abundant enough, will introduce disturbances into these distributions, leading to a decline in the model's expressive capacity. To
overcome this problem, we freeze the embedding layers of LLMs and fine-tune the rest of the parameters. This assists LLMs in maintaining correctness and diversity in their expressions.
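
A minimal sketch of the FixEmb setting under the hyperparameters from Section 4.2 might look as follows; the Trainer wiring, the DeepSpeed configuration file, and the decision to freeze the output embeddings together with the input embeddings are assumptions made for illustration rather than details taken from the paper.

```python
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-7b1-mt")

# FixEmb: keep the pre-trained word-embedding distribution intact.
for param in model.get_input_embeddings().parameters():
    param.requires_grad = False
if model.get_output_embeddings() is not None:  # tied or untied LM head (assumed frozen as well)
    for param in model.get_output_embeddings().parameters():
        param.requires_grad = False

training_args = TrainingArguments(
    output_dir="taste-fixemb",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,      # 8 GPUs x 2 x 8 = effective batch size 128
    learning_rate=2e-5,
    weight_decay=0.0,
    deepspeed="ds_zero3_config.json",   # assumed DeepSpeed ZeRO stage-3 config file
    bf16=True,
)
```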

# 4.3 Baselines

The MT-( $\cdot$ ) baseline models represent the LLMs trained exclusively with the Basic Translation dataset, as outlined in Table 11. This dataset contains the German $\Leftrightarrow$ English and Chinese $\Leftrightarrow$ English translation directions.
Additionally, we present the results of WMT22 winners, NLLB-3.3B (Costa-jussa et al., 2022), a multilingual translation model trained in over 200 languages, Bayling (Zhang et al., 2023b), ParroT (Jiao et al., 2023), and TIM (Zeng et al., 2023), LLMs fine-tuned for machine translation with BLOOM or LLaMA as the backbone models.

# 5 Results

Our main results are shown in Table 2. Almost all of our methods outperform the corresponding MT- $(\cdot)$ baseline across both metrics and all language pairs, providing evidence of the effectiveness of our approach in enhancing the translation capabilities of LLMs. When utilizing BLOOMZ-7b1-mt as the backbone model, our FixEmb- $(\cdot)$ approaches achieve favorable results, particularly in $\mathrm{Zh}\Leftrightarrow$ En directions, and outperform ParroT and TIM across all language pairs on COMET scores. While employing LLaMA-2-7b as the backbone model, our FixEmb- $(\cdot)$ approaches also gain remarkable results, particularly in De $\Leftrightarrow$ En directions, and beat Bayling in all directions except En $\Leftrightarrow$ Zh.
There is no significant difference in translation performance observed between two different quality prediction approaches, $(\cdot)$ - $QE$ and $(\cdot)$ - $TC$ . This suggests that both of these approaches effectively aid LLMs in grasping the quality differences between varying translations.
The models trained with fixed embedding layers consistently outperform their counterparts trained with full parameters across all language pairs and both evaluation metrics. We argue that this is because fixing embedding layers during fine-tuning effectively preserves the expressive capability of LLMs against word distribution biases within the training data. This facilitates the generalization of LLMs across the word domain, mitigating overfitting and thereby enhancing their capacity to produce robust and diverse translations.
<table><tr><td>Model</td><td>PPL</td><td>Pred.↑</td><td>P↑</td><td>R↑</td><td>F1↑</td></tr><tr><td>BLOOMZ</td><td>-37.10</td><td>76.84</td><td>70.1</td><td>68.2</td><td>67.6</td></tr><tr><td>LLaMA-2</td><td>0.00</td><td>80.33</td><td>70.5</td><td>70.1</td><td>69.8</td></tr></table>
Table 3: Evaluation results on quality prediction task in $\mathrm{Zh} \Rightarrow \mathrm{En}$ direction. Precision, recall, and F1 values are calculated as weighted averages across three translation quality categories. PPL/Pred. represents Pearson's $r$ between the perplexity values/predicted scores and the COMET scores.
We also train a merged model that handles QE and TC approaches simultaneously, and conduct a comparison of the translation performance across models of different scales. Please refer to Appendix A.3 and A.4 for more details.

# 6 Analysis

Unless mentioned otherwise, the subsequent experiments are conducted in the FixEmb-TC setting.

# 6.1 How Good Are LLMs at Quality Prediction?

Quality Prediction constitutes an end-to-end process, where LLMs are instructed to predict quality labels or scores while generating translations. To validate the assertion that LLMs have genuinely acquired the capability to predict the quality of candidates, we evaluated the quality prediction outputs. For TC, we construct gold labels for the instances according to their COMET scores following the same principle mentioned in Appendix A.2 and report the precision, recall, and F1 values of the predicted labels. For QE, we assessed the Pearson's correlation coefficient between the predicted quality scores and the gold COMET scores. Additionally, we present the Pearson's correlation coefficient between the perplexity values (PPL) of the candidates and the COMET scores for comparison.
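
This evaluation only requires standard tooling; a sketch with scikit-learn and SciPy (toy placeholder values) is given below.

```python
from sklearn.metrics import precision_recall_fscore_support
from scipy.stats import pearsonr

gold_labels = ["Good", "Bad", "Medium"]     # derived from COMET scores as in Appendix A.2
pred_labels = ["Good", "Medium", "Medium"]  # labels generated by the model (TC)
precision, recall, f1, _ = precision_recall_fscore_support(
    gold_labels, pred_labels, average="weighted", zero_division=0
)

pred_scores = [83.0, 41.0, 68.0]            # scores generated by the model (QE)
comet_scores = [0.79, 0.42, 0.66]           # gold COMET scores of the same candidates
r_pred, p_value = pearsonr(pred_scores, comet_scores)
print(f"weighted P/R/F1: {precision:.3f}/{recall:.3f}/{f1:.3f}, Pearson r: {r_pred:.3f} (p={p_value:.3g})")
```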
As shown in Table 3, for the TC approach, the models exhibit a commendable level of accuracy in assigning quality labels to their translations, as evidenced by F1 values surpassing 67.6. In the QE task, our models produce scores with a satisfactory correlation with COMET scores (the p-values are all smaller than 0.01), while the perplexity values demonstrate a relatively poor correlation with COMET scores. These statistics demonstrate that our models can make precise quality predictions for their own generated translations, providing a dependable reference for the Draft Refinement task.
We can also discover that LLaMA-2 outperforms BLOOMZ in terms of accuracy for both the QE and TC tasks, suggesting that LLaMA-2 possesses a more extensive bilingual knowledge base.

Figure 2: Comparison between the COMET scores of the preliminary and refined translations. The results are obtained by LLaMA-2-7b in $\mathrm{Zh}\Rightarrow \mathrm{En}$ direction.

# 6.2 Effect of Draft Refinement

To analyze the influence of the Draft Refinement process (i.e., the second stage of inference), we perform the following two comparisons between the candidates obtained after the first and second inference stages.
Translation Quality We evaluate the COMET scores of the preliminary and refined translations. The results are shown in Figure 2. In the plot, each point located above the diagonal line represents an instance where a quality improvement is achieved through refinement. As the plot demonstrates, a majority of the final candidates exhibit higher quality levels than their initial counterparts.
Table 4 illustrates the proportions of preliminary translations with varying predicted quality labels and their respective average COMET score increments during the refinement process. The most significant score enhancements are observed in instances labeled as "Bad", which constitute the largest proportion of all instances. Subsequently, "Medium" instances show a moderate improvement, while "Good" instances exhibit the least noticeable enhancement.
Figure 3: Comparison between the UTW percentages of the preliminary and refined translations.
<table><tr><td>Label</td><td>Proportion (%)</td><td>ΔCOMET</td></tr><tr><td>Good</td><td>31.89</td><td>0.45</td></tr><tr><td>Medium</td><td>32.80</td><td>2.06</td></tr><tr><td>Bad</td><td>35.31</td><td>7.79</td></tr></table>

Table 4: Proportions of preliminary translations with different predicted quality labels and their average COMET score increments during refinement. These results are obtained by LLaMA-2-7b in $\mathrm{Zh} \Rightarrow \mathrm{En}$ direction.
<table><tr><td>Label</td><td>Edit Distance</td><td>COMET</td></tr><tr><td>Origin</td><td>18.98+0.00</td><td>79.53+0.00</td></tr><tr><td>Good</td><td>16.95-2.03</td><td>79.25-0.28</td></tr><tr><td>Random</td><td>18.78-0.20</td><td>79.36-0.17</td></tr><tr><td>Bad</td><td>20.20+1.22</td><td>79.51-0.02</td></tr><tr><td>Blank</td><td>18.12-0.86</td><td>79.08-0.45</td></tr></table>
Table 5: The edit distance between the preliminary and refined translations and the final COMET scores under different quality label configurations. "Origin" represents the configuration where predicted labels remain unmodified. "Blank" represents that quality labels are removed during refinement processes. These results are obtained by LLaMA-2-7b in $\mathrm{Zh} \Rightarrow \mathrm{En}$ direction.
These observations highlight the efficacy of the Draft Refinement process in refining the preliminary translations generated in the first inference stage, as well as rectifying potential generation failures, as evidenced by instances located in the top-left region of Figure 2.
Unaligned Translation Words (UTW) We measure the percentages of target words that remain unaligned in a word-to-word alignment between the source sentences and translations obtained after the first and second inference stages. The alignments are extracted using the tool developed by Dou and Neubig (2021). This measurement is also used by Hendy et al. (2023) to investigate the presence of words that have no support in the source sentences. The results are shown in Figure 3. We can observe that the amount of UTW is significantly reduced during the draft refinement process, with a decrease of more than 15 percentage points. This observation suggests that the Draft Refinement process contributes to a reduction in hallucinations within the candidates, leading to a higher level of translation precision and mitigation of potential risks within the translation systems.
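
Given word-to-word alignment links (for example, the source-target index pairs produced by the aligner), the UTW percentage can be computed as in the following sketch; the alignment input format is an assumption for illustration.

```python
# Percentage of target-side words that receive no alignment link.
def utw_percentage(target_sentences, alignments):
    """alignments[k] is a set of (src_idx, tgt_idx) pairs for the k-th sentence pair."""
    unaligned, total = 0, 0
    for tgt, links in zip(target_sentences, alignments):
        tgt_tokens = tgt.split()
        aligned_tgt = {j for _, j in links}
        unaligned += sum(1 for j in range(len(tgt_tokens)) if j not in aligned_tgt)
        total += len(tgt_tokens)
    return 100.0 * unaligned / max(total, 1)

# Toy example with two sentence pairs: "now" is unaligned, so 1 of 8 words -> 12.5%.
targets = ["scan the QR code", "download the client now"]
links = [{(0, 0), (1, 1), (2, 2), (2, 3)}, {(0, 0), (1, 1), (2, 2)}]
print(utw_percentage(targets, links))
```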

# 6.3 The Role of Quality Labels

To examine the impact of the predicted quality labels on the refinement process, we conduct experiments in which these labels are modified before refinement.
<table><tr><td>Method</td><td>BLEU</td><td>COMET</td></tr><tr><td>MT-FixEmb</td><td>19.41</td><td>72.06</td></tr><tr><td>TASTE</td><td>21.98</td><td>76.38</td></tr><tr><td>w/o Basic Translation</td><td>20.00</td><td>72.12</td></tr><tr><td>w/o Quality Prediction</td><td>17.86</td><td>72.26</td></tr><tr><td>w/o Draft Refinement</td><td>19.31</td><td>72.00</td></tr></table>
Table 6: Ablation Study. We report the BLEU and COMET scores in $\mathrm{En} \Rightarrow \mathrm{De}$ direction achieved by BLOOMZ-7b1-mt.
<table><tr><td>System</td><td>Zh→En</td><td>En→Zh</td><td>De→En</td><td>En→De</td></tr><tr><td>CoT-7b</td><td>74.50</td><td>73.79</td><td>79.63</td><td>74.37</td></tr><tr><td>CoT-13b</td><td>75.21</td><td>75.32</td><td>80.10</td><td>73.55</td></tr><tr><td>TASTE</td><td>79.53</td><td>84.24</td><td>84.11</td><td>83.80</td></tr></table>
We modify the labels with the following configurations: a) All the labels are set to "Good". b) All the labels are set to "Bad". c) All the labels are randomly sampled among "Good", "Medium", and "Bad". d) All the labels are removed from the prompts, and the model is only provided with draft translations during the refinement process. Subsequently, we perform the refinement process and calculate the average edit distances between the preliminary and refined translations as follows:

$$
\bar{d} = \frac{1}{n} \sum_{i=1}^{n} \left(1 - \operatorname{LevRatio}_{i}\right) = \frac{1}{n} \sum_{i=1}^{n} \frac{\operatorname{LevDist}_{i}}{\operatorname{len}_{i}^{1} + \operatorname{len}_{i}^{2}}
$$

Here, $\operatorname{LevRatio}_i$ represents the Levenshtein distance ratio$^4$ of the $i$-th instance, $\operatorname{len}_i^1$ and $\operatorname{len}_i^2$ represent the lengths of the two strings, respectively, and $\operatorname{LevDist}_i$ represents the Levenshtein distance between these strings.
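
A direct implementation of this average is sketched below, using a plain dynamic-programming Levenshtein distance rather than a library ratio function; the values reported in Table 5 are presumably this quantity on a scaled (percentage-like) range.

```python
# Normalized Levenshtein distance averaged over all instances, following the formula above.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def mean_edit_distance(drafts, refined):
    dists = [levenshtein(d, r) / (len(d) + len(r)) for d, r in zip(drafts, refined)]
    return sum(dists) / len(dists)
```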
We report the average edit distances and the COMET score of the refined translations in Table 5. In the cases where all the labels are set to "Good", the edit distances between the preliminary and refined translations are relatively small. This suggests that the model tends to make fewer modifications to the preliminary translations. Conversely, when all the labels are set to "Bad", the edit distances are relatively large, indicating that the model tends to make more modifications during refinement. Furthermore, noticeable performance decreases are observed when the labels are
Table 7: COMET scores gained by our approach and the CoT method.
<table><tr><td>System</td><td>Zh→En</td><td>En→Zh</td><td>De→En</td><td>En→De</td></tr><tr><td>ICL-2shot</td><td>77.43</td><td>78.69</td><td>82.99</td><td>78.85</td></tr><tr><td>ICL-3shot</td><td>77.89</td><td>79.60</td><td>83.05</td><td>79.27</td></tr><tr><td>ICL-4shot</td><td>77.91</td><td>79.89</td><td>83.16</td><td>79.65</td></tr><tr><td>TASTE</td><td>79.53</td><td>84.24</td><td>84.11</td><td>83.80</td></tr></table>
Table 8: COMET scores gained by our approach and the ICL method.
set to "Good", sampled randomly (i.e. Random), or removed from the prompts (i.e. Blank). These phenomena illustrate the impact of the quality labels in the refinement process, which is to assist LLMs in making reasonable adjustments based on the actual translation quality levels and generating high-quality final candidates.

# 6.4 Ablation Study

To emphasize the necessity of our multitask training set and prompt design, we conduct an ablation study. We choose BLOOMZ-7b1-mt as the backbone model and fine-tune it using various training sets with the FixEmb-TC method. BLEU and COMET scores in the $\mathrm{En}\Rightarrow \mathrm{De}$ direction are reported.
Our multitask training set contains three parts: Basic Translation, Quality Prediction, and Draft Refinement. To demonstrate the rationality of this task combination, we remove a specific section of the training set separately, and the consequences are shown in Table 6. The performance of the model decreases when any subset of the training data is removed. This result implies that each of the sub-tasks is essential for our approach. When the Quality Prediction data is removed from the training set, the BLEU scores exhibit the most noticeable decrease. This observation suggests that the TASTE process heavily relies on the model's ability to discern various qualities of translations.

# 6.5 Comparison with Related Methods

TASTE vs CoT Our approach is based on a two-stage inference, which is similar to the idea of CoT. To verify the superiority of our proposal, we perform a comparison with the CoT method. We apply the same prompts utilized in TASTE to guide a two-stage inference process with LLaMA-2-chat-7b and LLaMA-2-chat-13b, both of which undergo no fine-tuning process. The results are shown in Table 7. In many-to-English translation directions, the CoT method gains reasonable performance, yet our approach outperforms it significantly. In English-to-many directions, the
<table><tr><td>Period</td><td>De→En</td><td>En→De</td></tr><tr><td>Before</td><td>78.27</td><td>72.06</td></tr><tr><td>After</td><td>84.16</td><td>84.19</td></tr></table>
Table 9: COMET scores obtained before and after the post-editing process.
CoT method failed to generate stable outcomes through the inference chain, primarily due to a severe off-target issue that kept the models from producing translations in the correct target languages.
TASTE vs ICL We also conduct a comparative analysis between TASTE and ICL methodologies. We employ LLaMA-2-chat-7b as the backbone model and incorporate source-target pairs randomly sampled from the Basic Translation training set as examples within the prompts. The ICL experiment encompasses settings ranging from 2-shot to 4-shot scenarios. The results, showcased in Table 8, reveal a significant performance margin between the ICL methods and our TASTE approach.

# 6.6 TASTE as an APE Tool

In the proposed TASTE framework, the fine-tuned LLMs are employed for the evaluation and refinement of their own draft translations. This naturally leads to the question: Are the fine-tuned TASTE LLMs able to evaluate base translations generated by arbitrary systems and refine them as an Automatic Post-Editing (APE) tool?
To answer this question, we conducted an experiment utilizing TASTE as an automatic post-editing tool. Initially, we select BLOOMZ-7b in the MT-FixEmb baseline setting to generate base translations. Subsequently, we employ LLaMA-2-7b in the FixEmb-TC setting as the APE model. We concatenate the base translation behind the prompt for the first inference stage and input it into the APE model to generate the quality label. Finally, we format the base translation and quality label into the prompt for the second inference stage to obtain the refined translation.
The results of this experiment, as indicated by the COMET scores before and after APE, are detailed in Table 9. Notable quality enhancements through the APE process can be observed, and the results even outperform the TASTE LLaMA-2-7b model due to the multi-system voting mechanism. This indicates that TASTE can not only serve as
an effective inference framework for a single LLM but also as an APE tool to enhance translations generated by other translation systems.

# 7 Conclusion

We introduce TASTE, a novel approach that enables LLMs to translate through the self-reflection process. Our approach allows LLMs to initially generate a preliminary translation and autonomously assess its quality. Subsequently, the translation is refined based on the evaluation results, resulting in the final candidate. Our experiments and analyses provide evidence of the effectiveness of TASTE, as it successfully enhances the translation quality through the refinement process, consistently producing high-quality candidates across various translation directions. Furthermore, our findings underscore that LLMs possess significant potential for the translation quality prediction task. The translation process can leverage this capacity to discern different qualities among translations, leading to the generation of high-quality outcomes.

# Limitations

The performance enhancement introduced by our approach exhibits inconsistency across different translation directions. We assume that this phenomenon is caused by the inherent uneven multilingual knowledge within the model, and a more in-depth exploration of the underlying principles is warranted. Additionally, considering the two inference stages in the TASTE process, the computation cost is twice that of the conventional translation generation process. However, it is worth noting that this extra time consumption can be mitigated through acceleration methods, such as quantization and speculative decoding.

# Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62206076), Guangdong Basic and Applied Basic Research Foundation (Grant No. 2024A1515011491), Shenzhen Science and Technology Program (Grant Nos. ZDSYS20230626091203008 and KJZD20231023094700001). Xuebo Liu was sponsored by CCF-Tencent Rhino-Bird Open Research Fund. We would like to thank the anonymous reviewers and meta-reviewer for their insightful suggestions.

# References

Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. 2023. In-context examples selection for machine translation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8857-8873.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. ArXiv preprint, abs/2303.12712.
Pinzhen Chen, Zhicheng Guo, Barry Haddow, and Kenneth Heafield. 2023. Iterative translation refinement with large language models. ArXiv preprint, abs/2306.03856.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2024. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1-53.
Marta R Costa-jussà, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. ArXiv preprint, abs/2207.04672.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112-2128, Online. Association for Computational Linguistics.
Zhaopeng Feng, Yan Zhang, Hao Li, Wenqiang Liu, Jun Lang, Yang Feng, Jian Wu, and Zuozhu Liu. 2024. Improving llm-based machine translation with systematic self-correction. ArXiv preprint, abs/2402.16379.
Xavier Garcia and Orhan Firat. 2022. Using natural language prompts for machine translation. ArXiv preprint, abs/2202.11822.
Marjan Ghazvininejad, Hila Gonen, and Luke Zettlemoyer. 2023. Dictionary-based phrase-level prompting of large language models for machine translation. ArXiv preprint, abs/2302.07856.
Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2024. Exploring human-like translation strategy with large language models. Transactions of the Association for Computational Linguistics, 12:229-246.
Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, and Hany Hassan Awadalla. 2023. How good are gpt models at machine translation? a comprehensive evaluation. ArXiv preprint, abs/2302.09210.
Wenxiang Jiao, Jen-tse Huang, Wenxuan Wang, Zhiwei He, Tian Liang, Xing Wang, Shuming Shi, and Zhaopeng Tu. 2023. Parrot: Translating during chat using large language models tuned with human translation and feedback. In *Findings of the Association for Computational Linguistics: EMNLP* 2023, pages 15009-15020.
Dayeon Ki and Marine Carpuat. 2024. Guiding large language models to post-edit machine translation with error annotations. ArXiv preprint, abs/2404.07851.
Tom Kocmi, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, et al. 2022. Findings of the 2022 conference on machine translation (wmt22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1-45.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199-22213.
Bei Li, Rui Wang, Junliang Guo, Kaitao Song, Xu Tan, Hany Hassan, Arul Menezes, Tong Xiao, Jiang Bian, and JingBo Zhu. 2023. Deliberate then generate: Enhanced prompting framework for text generation. ArXiv preprint, abs/2305.19835.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. ArXiv preprint, abs/2211.09110.
Hongyuan Lu, Haoyang Huang, Dongdong Zhang, Haoran Yang, Wai Lam, and Furu Wei. 2023. Chain-of-dictionary prompting elicits translation in large language models. ArXiv preprint, abs/2305.06575.
Yu Lu, Jiali Zeng, Jiajun Zhang, Shuangzhi Wu, and Mu Li. 2022. Learning confidence for transformer-based neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2353-2364, Dublin, Ireland. Association for Computational Linguistics.
OpenAI. 2023. Gpt-4 technical report.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730-27744. Curran Associates, Inc.
Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, and Dacheng Tao. 2023. Towards making the most of ChatGPT for machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5622-5633, Singapore. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins. 2022. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578-585, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Zhixing Tan, Xiangwen Zhang, Shuo Wang, and Yang Liu. 2022. MSP: Multi-stage prompting for making pre-trained language models better translators. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6131-6142, Dublin, Ireland. Association for Computational Linguistics.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. ArXiv preprint, abs/2307.09288.
David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, and George Foster. 2023. Prompting palm for translation: Assessing strategies and performance. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15406-15427.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1784-1794.
Haoran Xu, Young Jin Kim, Amr Sharaf, and Hany Hassan Awadalla. 2023. A paradigm shift in machine translation: Boosting translation performance of large language models. ArXiv preprint, abs/2309.11674.
Jiali Zeng, Fandong Meng, Yongjing Yin, and Jie Zhou. 2023. Tim: Teaching large language models to translate with comparison. ArXiv preprint, abs/2307.04408.
Biao Zhang, Barry Haddow, and Alexandra Birch. 2023a. Prompting large language model for machine translation: A case study. In International Conference on Machine Learning, pages 41092-41110. PMLR.
Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, et al. 2023b. Bayling: Bridging cross-lingual alignment and instruction following through interactive translation for large language models. ArXiv preprint, abs/2306.10968.
Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Lingpeng Kong, Jiajun Chen, Lei Li, and Shujian Huang. 2023. Multilingual machine translation with large language models: Empirical results and analysis. ArXiv preprint, abs/2304.04675.

# A Appendix

# A.1 Quality Prediction Task Designs

The quality prediction task is designed in two forms: text classification (TC) and quality estimation (QE).
Text Classification (TC) We instruct LLMs to categorize translations into three classes by the instruction "Translate from [SRC] to [TGT], and label the translation quality as "Good", "Medium" or "Bad". For the candidates with the top $10\%$ COMET scores, the gold
<table><tr><td>Task</td><td>Good</td><td>Medium</td><td>Bad</td></tr><tr><td>Quality Prediction</td><td>30.0k</td><td>30.0k</td><td>30.0k</td></tr><tr><td>Draft refinement</td><td>8.0k</td><td>8.0k</td><td>4.0k</td></tr></table>
Table 10: Numbers of instances of all three quality categories in the training set for the Quality Prediction and Draft Refinement sub-task.
<table><tr><td>Task</td><td>Size</td><td>Source</td></tr><tr><td>Basic Translation</td><td>45.4k</td><td>WMT Dev</td></tr><tr><td>Draft Refinement</td><td>20.0k</td><td>MTME</td></tr><tr><td>Quality Prediction</td><td>90.0k</td><td>MTME</td></tr></table>
Table 11: Data sizes and sources of the training sets.
labels are assigned as "Good", while those with the bottom $50\%$ of COMET scores are labeled as "Bad". Candidates falling within the remaining range are designated as "Medium".
Quality Estimation (QE) We request LLMs to simultaneously predict integer quality scores ranging from 0 to 100 while generating translations by the following instruction: "Translate from [SRC] to [TGT], and score the translation quality from 0 to 100." Here, the placeholders "[SRC]" and "[TGT]" denote the source and target language, respectively. We amplify the COMET scores by a factor of one hundred and round them to use as gold scores.
The QE task can be regarded as a more precise version of the TC task, which is perceived as more challenging for generative language models. The methodologies employed during the training and test phase will remain consistent.

# A.2 Data Details

WMT Development Data We use human-written validation data from previous WMT competitions as the basic MT training data to align LLMs on the MT task. Specifically, we choose the newstest2017-2021 of German $\Leftrightarrow$ English and Chinese $\Leftrightarrow$ English as our MT training set.
MTME Multi-Candidate Data This is a dataset containing source sentences and translation candidates of multiple MT systems on the WMT Metrics Shared Tasks built by Google Research. We use the candidates of newstest2019-2021 in German $\Leftrightarrow$ English and Chinese $\Leftrightarrow$ English directions to build training data for the Quality Prediction and Draft
Refinement task. For Quality Prediction, the inputs for the LLMs are the instructions and the source sentences, and the text generation labels are sampled candidates with their corresponding quality labels/scores attached at the end. For Draft Refinement, we choose the candidate with the highest COMET score among all candidates of one source sentence as the label for the LLMs, and the draft translation is sampled from the rest of them. The inputs for the LLMs are the instructions, the source sentences, and the drafts with their corresponding quality labels/scores attached in the front.
To enable the LLMs to have a good understanding of the translation quality, we carefully designed the proportion of the candidates with different quality levels. We classified the candidates into three categories by the COMET scores evaluated by wmt-22-comet-da. Candidates with the top $10\%$ COMET scores are classified as "Good", while those with the bottom $50\%$ of COMET scores are classified as "Bad". Candidates falling within the remaining range are designated as "Medium". For the Quality Prediction and Draft Refinement training set, the numbers of instances constructed by candidates of all three quality categories are shown in Table 10.
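
The percentile-based labeling can be sketched as follows; the tie-breaking at the two thresholds is an arbitrary choice made for illustration.

```python
import numpy as np

def assign_labels(comet_scores):
    # Top 10% of COMET scores -> "Good", bottom 50% -> "Bad", the rest -> "Medium".
    scores = np.asarray(comet_scores)
    good_cut = np.percentile(scores, 90)
    bad_cut = np.percentile(scores, 50)
    labels = []
    for s in scores:
        if s >= good_cut:
            labels.append("Good")
        elif s <= bad_cut:
            labels.append("Bad")
        else:
            labels.append("Medium")
    return labels
```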
The sizes and sources of the training data for the three tasks are represented in Table 11. Examples of the complete prompts and labels for these tasks are shown in Table 14.

# A.3 Merged Model

We also train a model that merges two types of Quality Prediction approaches, Text Classification (TC) and Quality Estimation (QE), to facilitate the TASTE self-reflection process and generate both preliminary and refined translations. Users have the flexibility to specify the approach by instructing the model in the first inference stage. If the instruction is "Translate from [SRC] to [TGT], and label the translation quality as "Good", "Medium" or "Bad", then the TC approach is adopted, and the model predicts quality labels for the preliminary translation. Otherwise, if the instruction is "Translate from [SRC] to [TGT], and score the translation quality from 1 to 100", the model employs the QE approach and predicts quality scores. For training the merged model, we utilized 45.4k instances of Basic Translations, 45k instances for each of the two Quality Prediction approaches (TC and QE), and 20k instances for each of the two Draft Refinement styles
<table><tr><td rowspan="2">Model Size</td><td colspan="2">Zh⇒En</td><td colspan="2">En⇒Zh</td><td colspan="2">De⇒En</td><td colspan="2">En⇒De</td><td colspan="2">Average</td></tr><tr><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td></tr><tr><td>MT-FixEmb</td><td>79.84</td><td>23.43</td><td>85.20</td><td>36.68</td><td>78.27</td><td>25.07</td><td>72.06</td><td>19.41</td><td>78.84</td><td>26.15</td></tr><tr><td>TASTE</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>FixEmb-QE</td><td>80.40</td><td>24.41</td><td>85.81</td><td>39.31</td><td>79.20</td><td>26.28</td><td>76.30</td><td>21.84</td><td>80.43</td><td>27.96</td></tr><tr><td>FixEmb-TC</td><td>80.28</td><td>24.20</td><td>85.90</td><td>39.07</td><td>78.96</td><td>26.27</td><td>76.38</td><td>21.98</td><td>80.38</td><td>27.88</td></tr><tr><td>FixEmb-Mix-QE</td><td>79.97</td><td>24.26</td><td>85.65</td><td>38.87</td><td>78.63</td><td>26.29</td><td>75.19</td><td>21.15</td><td>79.86</td><td>27.64</td></tr><tr><td>FixEmb-Mix-TC</td><td>80.11</td><td>24.19</td><td>85.60</td><td>38.73</td><td>78.48</td><td>25.90</td><td>75.04</td><td>21.02</td><td>79.81</td><td>27.46</td></tr></table>
Figure 4: COMET scores obtained from BLOOMZ across different model sizes.
Table 12: COMET and BLEU scores achieved by the merged model. FixEmb-Mix-QE and FixEmb-Mix-TC represent the results obtained by the merged model employing QE and TC approaches during the inference process, respectively.
<table><tr><td rowspan="2">Model Size</td><td colspan="2">Zh→En</td><td colspan="2">En→Zh</td><td colspan="2">De→En</td><td colspan="2">En→De</td><td colspan="2">Average</td></tr><tr><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td><td>COMET</td><td>BLEU</td></tr><tr><td>1.7B</td><td>78.15</td><td>20.76</td><td>83.67</td><td>34.96</td><td>73.82</td><td>21.80</td><td>65.53</td><td>17.21</td><td>75.29</td><td>23.68</td></tr><tr><td>3B</td><td>78.91</td><td>22.54</td><td>84.56</td><td>36.43</td><td>76.14</td><td>23.88</td><td>70.90</td><td>19.12</td><td>77.63</td><td>25.49</td></tr><tr><td>7.1B</td><td>80.28</td><td>24.20</td><td>85.90</td><td>39.07</td><td>78.96</td><td>26.27</td><td>76.38</td><td>21.98</td><td>80.38</td><td>27.88</td></tr></table>
Table 13: COMET and BLEU scores achieved by BLOOMZ across different model sizes.
(TC and QE). BLOOMZ-7b1-mt is employed as the backbone model.
The results are shown in Table 12. We observe that although there is a marginal decrease in translation performance, the merged model demonstrates the capability to handle two types of quality expression approaches simultaneously and successfully conducts the normal inference process as the non-merged models.

# A.4 Effect of Model Size

We report the COMET and BLEU scores yielded by BLOOMZ of various model sizes in Figure 4 and Table 13.
We can observe that with the increase in the number of model parameters, both the median and mean scores are consistently rising. This indicates that our proposed method is robust in terms of model parameter scaling. As mentioned in §5, LLMs depend on large amounts of parameters to memorize task-specific knowledge to perform multi-tasking. In addition, the instructions we designed for different tasks are highly similar, which makes it more challenging but essential for LLMs to grasp different types of knowledge.
Another observation is that the distribution of scores achieved by larger models tends to be more concentrated than that obtained by smaller ones. This indicates that as the number of model parameters increases, the performance of LLMs is not only enhanced but also stabilized, which means bad cases occur less frequently, guaranteeing the lower bound of the capacity.

# A.5 Case Study

Several cases of the translation process of TASTE are shown in Table 15. The quality labels are predicted as "Bad" when some severe lexical or syntactic problems occur in the preliminary translations. In Case 1, an ambiguous Chinese character "扫" is inaccurately translated into "sweep", and the term "二维码" is literally translated as "two-dimensional code" instead of "QR code".
<table><tr><td>Task</td><td>Prompt</td></tr><tr><td>Basic Translation</td><td>Write a response that appropriately completes the request.\n\nRequest:\nTranslate from Chinese to English.\n一辆1948年的福特水星汽车穿过佐治亚州门罗小镇的一群围观者,朝着小小的摩尔滩桥隆隆奔行。\n\n### Note: A translation with no errors could be\n\n### Response: A 1948 Ford Mercury passed through a group of onlookers in rural Monroe, Georgia, and rumbled toward the small Moore's Ford Bridge.</td></tr><tr><td>Quality Prediction (TC)</td><td>Write a response that appropriately completes the request.\n\n### Request:\nTranslate from English to German, and label the translation quality as “Good”, “Medium” or “Bad”\n北京大兴国际机场首航开启了北京“双机场”时代。\n\n### Response: The first flight of Beijing Daxing International Airport ushered in the era of Beijing’s “double airport.”\n[Good]</td></tr><tr><td>Quality Prediction (QE)</td><td>Write a response that appropriately completes the request.\n\n### Request:\nTranslate from Chinese to English, and score the translation quality from 0 to 100.\n7月26日在上海拍摄的公共卫生防疫专业委员会成立仪式现场。\n\n### Response: The scene of the inauguration ceremony of the Public Health Epidemic Prevention Professional Committee taken in Shanghai on July 26.\n[83]</td></tr><tr><td>Draft Refinement</td><td>Write a response that appropriately completes the request.\n\n### Request:\nTranslate from Chinese to English.\n虽然朱雨玲连追3分,但丁宁还是利用发球以11:9拿下首局。\n\n### Hint:\nDraft with quality label:\n[Bad] Although he had only three points, he took the ball to 11:9.\n\n### Note: A translation with no errors could be\n\n### Response: Although Zhu Yuling chased three points in a row, but Ding Ning used his serve to take the first set 11-9.</td></tr></table>
Table 14: Examples of the prompts and labels for the LLMs. We follow Jiao et al. (2023) to surround the inputs with "Write a response that appropriately completes the request." to instruct the LLMs to complete specific tasks. The contents behind "Response:" are the labels for the text generation fine-tuning of the LLMs.
In Case 2, the preliminary translation is incomplete, omitting the latter part of the source sentence. In Case 3, the word order in the preliminary translation is notably awkward. All these issues are effectively addressed during the second inference stage, resulting in refined translations of high quality. Even when the predicted quality labels are designated as "Medium" or "Good", as seen in Case 4 and Case 5, the second-stage inference continues to make fine-grained adjustments to the preliminary translations based on the actual context and linguistic nuances.
<table><tr><td colspan="2">Case 1</td></tr><tr><td>SRC</td><td>扫二维码下载贴吧客户端</td></tr><tr><td>REF</td><td>Scan QR code to download Tieba app</td></tr><tr><td>Stage 1</td><td>Sweep the two-dimensional code to download the client [Bad]</td></tr><tr><td>Stage 2</td><td>Scan the QR code to download the client</td></tr><tr><td colspan="2">Case 2</td></tr><tr><td>SRC</td><td>“部级领导干部历史文化讲座”20周年精华珍藏版 王春瑜、张大可、熊召政、葛剑雄、邓小南等14位史学泰斗、文化名家、知名学者献给部级领导干部的公开课</td></tr><tr><td>REF</td><td>“History and Culture Lectures for Ministerial Cadres” 20th Anniversary Essence Collector's Edition, an open class dedicated to ministerial cadres by 14 leading historians, cultural masters and well-known scholars including Wang Chunyu, Zhang Dake, Xiong Zhaozheng, Ge Jianxiong, Deng Xiaonan</td></tr><tr><td>Stage 1</td><td>"20th Anniversary of the History and Culture Lecture of Leading Cadres at the Ministerial Level" [Bad]</td></tr><tr><td>Stage 2</td><td>The 20th Anniversary of the "Historical and Cultural Lectures for Leading Cadres at the Ministerial Level" - 14 Famous Historians, Cultural Celebrities, and Renowned Scholars Including Wang Chunyu, Zhang Dake, Xiong Zhengxiong, and Deng Xiaonan Present Public Lectures for Leading Cadres at the Ministerial Level</td></tr><tr><td colspan="2">Case 3</td></tr><tr><td>SRC</td><td>当时刚创立的台积电,在英特尔这个巨无霸面前就是小喽罗。</td></tr><tr><td>REF</td><td>The newly established Taiwan Semiconductor Manufacturing Company was only a small player in front of Intel the giant.</td></tr><tr><td>Stage 1</td><td>At that time, the newly established Taiwan accumulator, in the face of the giant Intel, is a small squeak. [Bad]</td></tr><tr><td>Stage 2</td><td>At that time, the newly established Taiwan Semiconductor was just a small fry in the face of Intel, the giant.</td></tr><tr><td colspan="2">Case 4</td></tr><tr><td>SRC</td><td>比如专业、年龄、想学习的语言,甚至在哪个城市,这些都会和你学习或者想要从事的行业有关系的。</td></tr><tr><td>REF</td><td>Such as your major, age, the language you want to learn, and even which city you are in; all of these are related to what you want to learn or what industry you want to go into.</td></tr><tr><td>Stage 1</td><td>For example, professional, age, want to learn the language, even in which city, these will be related to your learning or want to engage in the industry. [Medium]</td></tr><tr><td>Stage 2</td><td>For example, your major, your age, the language you want to learn, and even the city you want to study in, are all related to the industry you want to work in.</td></tr><tr><td colspan="2">Case 5</td></tr><tr><td>SRC</td><td>箱中袋装水,其包装盒采用100%可回收利用,可减少包装66%,减少碳排放97%。</td></tr><tr><td>REF</td><td>The packaging of the bagged water in the box is 100% recyclable, which can reduce packaging by 66% and carbon emission by 97%.</td></tr><tr><td>Stage 1</td><td>The packaging box uses 100% recyclable packaging, which can reduce packaging by 66% and reduce carbon dioxide emissions by 97%. [Good]</td></tr><tr><td>Stage 2</td><td>The bagged water in the box is packed in a 100% recyclable packaging box, which can reduce packaging by 66% and carbon dioxide emissions by 97%.</td></tr></table>
|
| 390 |
+
|
| 391 |
+
Table 15: Cases of translation process of TASTE in Chinese $\Rightarrow$ English direction. The backbone model is LLaMA-2-7b trained with its embedding layer fixed. "Stage 1" represents the preliminary translation generated during the first inference process, and "Stage 2" represents the refined translation generated during the second inference process. The predicted quality labels for the drafts are marked using highlights.
|
2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c28bf8d594ef0cbb6e475ce9c39d2ba518c8968d203fc2f7a3f2f6ff320d6c97
|
| 3 |
+
size 1341406
|
2024/TasTe_ Teaching Large Language Models to Translate through Self-Reflection/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/c2b049d1-b436-4236-92b0-5c5a9fd00514_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/c2b049d1-b436-4236-92b0-5c5a9fd00514_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/c2b049d1-b436-4236-92b0-5c5a9fd00514_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a15f8949633e4256e269dbbcf55124ed96de206b5b6eaae073aa2759f2e5b643
|
| 3 |
+
size 1551710
|
2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/full.md
ADDED
|
@@ -0,0 +1,535 @@
|
| 1 |
+
# TaxoLLaMA: WordNet-based Model for Solving Multiple Lexical Semantic Tasks
|
| 2 |
+
|
| 3 |
+
Viktor Moskvoretskii $^{1,2}$ , Ekaterina Neminova $^{1}$ , Alina Lobanova $^{1}$ , Alexander Panchenko $^{2,3}$ , and Irina Nikishina $^{4}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ HSE University, $^{2}$ Skoltech, $^{3}$ AIRI, $^{4}$ Universität Hamburg
|
| 6 |
+
|
| 7 |
+
{v.moskvoretskii, a.panchenko}@skol.tech, {esneminova, alobanova}@edu.hse.ru, irina.nikishina@uni-hamburg.de
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
In this paper, we explore the capabilities of LLMs in capturing lexical-semantic knowledge from WordNet, using the LLaMA-2-7b model as an example, and test it on multiple lexical semantic tasks. As the outcome of our experiments, we present TaxoLLaMA, the "all-in-one" model for taxonomy-related tasks, lightweight due to 4-bit quantization and LoRA. TaxoLLaMA achieves 11 SOTA results and 4 top-2 results out of 16 tasks on the Taxonomy Enrichment, Hypernym Discovery, Taxonomy Construction, and Lexical Entailment tasks. Moreover, it demonstrates very strong zero-shot performance on Lexical Entailment and Taxonomy Construction with no fine-tuning. We also explore its hidden multilingual and domain adaptation capabilities with a little tuning or few-shot learning. All datasets, code, and pre-trained models are available online. $^{1}$
|
| 12 |
+
|
| 13 |
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
Recent studies in Natural Language Processing widely utilize Large Language Models (LLMs) for their capability to store extensive knowledge (Sun et al., 2023; Kauf et al., 2023; Tang et al., 2023) and to adapt quickly to different tasks via in-context learning without backpropagation (Dong et al., 2023). However, the application of LLMs to the classical lexical semantic tasks still remains understudied: for instance, no recent experiments with LLMs have been performed for the Hypernym Discovery task (Camacho-Collados et al., 2018) for different domains and languages. In Taxonomy Enrichment, LLMs are mostly used to extract vector representations which are further processed with a complex pipeline (Jiang et al., 2022).
|
| 16 |
+
|
| 17 |
+
Our work aims to investigate the capabilities of LLMs in addressing four tasks requiring taxonomic knowledge: Hypernym Discovery, Taxonomy Enrichment, Lexical Entailment, and Taxonomy Construction.
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1: Training procedure of TaxoLLaMA: hypernym relations from the WordNet are linearized and fed into an LLM model. The model aims at generating the correct hypernym(s) as output.
|
| 21 |
+
|
| 22 |
+
We hypothesize that a model finetuned on hypernym (IS-A) relationships would be useful for solving taxonomy-related tasks. To verify this hypothesis, we develop a method inspired by Moskvoretskii et al. (2024) to compile a taxonomy-focused instruction tuning dataset, sourced from the English WordNet (Miller, 1998), to bring the implicit word knowledge of an LLM to the forefront when addressing lexical semantic tasks.
|
| 23 |
+
|
| 24 |
+
Having trained our model in this specialized setting, we are releasing the TaxoLLaMA — the finetuned version of the LLaMA-2-7b model (Touvron et al., 2023) — that is capable of solving tasks requiring taxonomic knowledge. Figure 1 presents the main idea of the model finetuning process. TaxoLLaMA operates effectively in a zero-shot setting, surpassing SOTA results in Lexical Entailment and Taxonomy Construction. With additional tuning, it also achieves SOTA performance in the Hypernym Discovery task across several languages and in half of the Taxonomy Enrichment tasks.
|
| 25 |
+
|
| 26 |
+
Furthermore, we have optimized TaxoLLaMA to be lightweight through 4-bit quantization (Dettmers et al., 2023) and the application of LoRA (Hu et al., 2022), making it feasible to run on GPUs with only $4.8\,\mathrm{GB}$ of memory for a forward pass and $5.5\,\mathrm{GB}$ for fine-tuning, ensuring its accessibility for widespread use, e.g. using $\mathrm{Colab}^2$.
|
| 27 |
+
|
| 28 |
+
The contributions of the paper are as follows:
|
| 29 |
+
|
| 30 |
+
- We introduce the use of LLMs across various lexical semantic tasks via hypernym prediction and propose an appropriate taxonomy instruction tuning method that exploits WordNet for dataset sampling.
|
| 31 |
+
- We present TaxoLLaMA - a unified model designed to address a spectrum of lexical-semantic tasks, achieving state-of-the-art (SOTA) results in 11 out of 16 tasks and securing the second rank in 4 tasks.
|
| 32 |
+
- We present an instruction-tuning dataset, based on the English WordNet-3.0 only, for training a taxonomy-based LLM, and we collect definitions for input words in the Taxonomy Enrichment and Lexical Entailment datasets using Wikidata<sup>3</sup> and ChatGPT<sup>4</sup>.
|
| 33 |
+
- We perform a detailed error analysis for all tasks using both manual and automatic approaches: e.g. we evaluate error patterns and model quality using ChatGPT.
|
| 34 |
+
|
| 35 |
+
# 2 Related Work
|
| 36 |
+
|
| 37 |
+
In this section, we briefly describe previous approaches to the lexical semantics tasks that we are experimenting with in the paper.
|
| 38 |
+
|
| 39 |
+
Hypernym Discovery The Hypernym Discovery task involves predicting a list of hypernyms for a given hyponym (see the example in Figure 2a). A recent study introduces a taxonomy-adapted, fine-tuned T5 model (Nikishina et al., 2023). Earlier models include the Recurrent Mapping Model (RMM) (Bai et al., 2021), which employs a Multilayer Perceptron (MLP) with residual connections and a contrastive-like loss, and CRIM (Bernier-Colborne and Barrière, 2018), distinguished as the best system in SemEval, which utilizes a similar MLP structure with a contrastive loss.
|
| 40 |
+
|
| 41 |
+
The Hybrid model (Held and Habash, 2019) combines the k-Nearest Neighbor approach with Hearst patterns, while the 300-sparsans method (Berend et al., 2018) enhances the traditional word2vec approach.
|
| 42 |
+
|
| 43 |
+
Taxonomy Enrichment This task is addressed in SemEval-2016 Task 14 (Jurgens and Pilehvar, 2016) and aims to attach a new word to the correct hypernym (node) in a given taxonomy. Numerous architectures have been proposed to solve the task in recent years. TMN (Zhang et al., 2021) exploits multiple scorers to find $\langle$ hypernym, hyponym $\rangle$ pairs for a given query concept. TaxoEnrich (Jiang et al., 2022) employs two LSTMs (Staudemeyer and Morris, 2019) to encode ancestor and descendant information. In addition, TaxoExpan (Shen et al., 2020) uses a Graph Neural Network (GNN) (Scarselli et al., 2008) to predict whether the query is a child of an anchor concept.
|
| 44 |
+
|
| 45 |
+
Taxonomy Construction The taxonomy construction task aims to extract hypernym-hyponym relations between a given list of domain-specific terms and then construct a domain taxonomy from them. Models for this task include Graph2Taxo (Shang et al., 2020), which employs a sophisticated GNN architecture, and LMScorer and RestrictMLM (Jain and Espinosa Anke, 2022), which use zero-shot RoBERTa or GPT-2 for scoring pair relationships and differ in whether they use MASK or next-token probabilities. TAXI+ (Aly et al., 2019) combines Hearst patterns with Poincaré embeddings to refine existing approaches.
|
| 46 |
+
|
| 47 |
+
Lexical Entailment Lexical entailment is a classification task that identifies semantic relationships between phrase pairs. For example, the hyponym "cat" entails the existence of the hypernym "animal".
|
| 48 |
+
|
| 49 |
+
One of the recent lexical entailment models is LEAR (Vulić and Mrkšić, 2018), a fine-tuning method that transforms the Euclidean space so that it reflects hyponymy-hypernymy relations. SeVeN (Espinosa-Anke and Schockaert, 2018) encodes relations between words as vectors. Pair2Vec (Joshi et al., 2019) and the GloVe variant introduced by Jameel et al. (2018) use word co-occurrence vectors and Pointwise Mutual Information. GBL ("Global" Entailment Graph) (Hosseini et al., 2018) is a GNN that utilizes "local" learning, and CTX ("Contextual" Entailment Graph) (Hosseini et al., 2021) is an improvement of GBL with contextual link prediction. McKenna et al. (2023) propose an
|
| 50 |
+
|
| 51 |
+

|
| 52 |
+
(a) Lexical semantic tasks
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
(b) Generation and ranking pipelines for solution of various lexical semantic tasks
|
| 56 |
+
Figure 2: Examples with input and output for each task, highlighted by color. The rectangle "hypernym" denotes a word generated by the model; a circle denotes a node from the graph. The confidence score determines whether a relationship exists between the two nodes provided in the input.
|
| 57 |
+
|
| 58 |
+
entailment smoothing technique applied to the CTX model, resulting in SOTA performance on the task.
|
| 59 |
+
|
| 60 |
+
# 3 Methodology
|
| 61 |
+
|
| 62 |
+
This section outlines the process of data collection and the subsequent training of the model.
|
| 63 |
+
|
| 64 |
+
# 3.1 Data Collection
|
| 65 |
+
|
| 66 |
+
To create the dataset, we apply the algorithm presented by Moskvoretskii et al. (2024), focusing on hyponym-hypernym relationships only. We sample both nouns and verbs from the WordNet-3.0 graph. To prepare our training and validation sets, we randomly pick edges to form hyponym-hypernym pairs; the motivation for this choice is given in Section E. If a child node links to more than one hypernym, we count each link as a separate pair. Additionally, we incorporate definitions for child nodes from WordNet to disambiguate the sense of the input word. As definitions may not be provided for some subtasks during inference (Lexical Entailment, MAG PSY, and MAG CS from Taxonomy Enrichment), we additionally generate definitions with ChatGPT for test sets that lack pre-defined explanations or take them from Wikidata. We use the web interface of ChatGPT 3.5 from February 2024 and the "gpt-3.5-turbo" model from the same period to generate definitions. The prompts for such requests and the statistics of the generated definitions are presented in the Appendix A in Examples 4-5 and in Table 7. This step is essential: the lack of definitions can
|
| 67 |
+
|
| 68 |
+
reduce the performance of the model, as shown in Moskvoretskii et al. (2024).
|
| 69 |
+
|
| 70 |
+
Below we show a training sample from our dataset used for instruction tuning of TaxoLLaMA. It comprises a system prompt describing the desired output (1) combined with an input word selected from WordNet, along with its definition (2), and the target (3), which is the true hypernym of this input word, also sourced from WordNet:
|
| 71 |
+
|
| 72 |
+
(1) [INST] <<SYS>> You are a helpful assistant. List all the possible words divided with a comma. Your answer should not include anything except the words divided by a comma <</SYS>>
|
| 73 |
+
(2) hyponym: tiger (large feline of forests in most of Asia having a tawny coat with black stripes) | hypernyms: [/INST]
|
| 74 |
+
(3) big cat,
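As an illustration, such a training sample can be assembled from a WordNet edge as in the minimal sketch below; the helper name `build_sample` and the exact whitespace are our assumptions, not the released preprocessing code.

```python
# Minimal sketch (an assumption, not the released code): build one instruction-tuning
# sample from a WordNet hyponym-hypernym edge, following the template shown above.
SYSTEM_PROMPT = (
    "You are a helpful assistant. List all the possible words divided with a comma. "
    "Your answer should not include anything except the words divided by a comma"
)

def build_sample(hyponym: str, definition: str, hypernym: str) -> dict:
    prompt = (
        f"[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n"
        f"hyponym: {hyponym} ({definition}) | hypernyms: [/INST]"
    )
    return {"prompt": prompt, "target": f"{hypernym},"}

sample = build_sample(
    "tiger",
    "large feline of forests in most of Asia having a tawny coat with black stripes",
    "big cat",
)
```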
|
| 75 |
+
|
| 76 |
+
The statistics of the generated datasets are provided in the next Subsection 3.2 along with the setups they were created for.
|
| 77 |
+
|
| 78 |
+
# 3.2 Training Details
|
| 79 |
+
|
| 80 |
+
We introduce two versions of our model: TaxoLLaMA, the model trained on the full WordNet-3.0 dataset for further community usage in lexical semantic tasks, and TaxoLLaMA-bench, designed for the benchmark tests. For this model, we make sure that the training set does not include any nodes from the test sets of those four tasks. The size of the training set for the first model is 44,772
|
| 81 |
+
|
| 82 |
+
items, whereas the other model was finetuned with 36,775 samples. The TaxoLLaMA-Verb model that we experiment with in Section 5.4 is fine-tuned exclusively on the verb sub-tree from WordNet, resulting in 7,712 samples. The finetuning procedure of our models is depicted in Figure 1.
|
| 83 |
+
|
| 84 |
+
To train in this setup, we use the LLaMA-2 model with 7 billion parameters (Touvron et al., 2023). For better computational efficiency during training and inference, we quantize the model to 4 bits and fine-tune it using QLoRA (Dettmers et al., 2023), i.e., a full-precision LoRA adapter on top of the quantized weights. During pretraining, we used a batch size of 32 and a learning rate of $3e^{-4}$, applying a cosine annealing scheduler. Any further fine-tuning for different domains or languages was done with a batch size of 2 and a learning rate of $3e^{-4}$, without schedulers. Other details are described in Appendix F.
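To make this setup concrete, the sketch below shows a plausible 4-bit QLoRA configuration with the `transformers` and `peft` libraries; the LoRA rank, alpha, and target modules are illustrative assumptions rather than the exact values deferred to Appendix F.

```python
# Hedged sketch of the 4-bit quantization + LoRA setup described above; the adapter
# hyperparameters here are assumptions (the paper defers its exact values to Appendix F).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(  # full-precision adapter on top of the 4-bit base weights
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# The training loop (omitted) uses batch size 32, learning rate 3e-4, cosine annealing.
```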
|
| 85 |
+
|
| 86 |
+
# 3.3 Task Adaptation
|
| 87 |
+
|
| 88 |
+
We propose two methods for adapting LLMs finetuned with the WordNet instructive dataset. Different tasks interpret a taxonomy node differently, as reflected in Figure 2. For instance, Taxonomy Enrichment operates with synsets or synset names, while Hypernym Discovery operates with lemmas (Figure 2a). In Figure 2b, there is no difference between "TRUE" and "two-node connected". However, they are depicted separately to represent the expected outputs for two distinct tasks: Taxonomy Construction and Lexical Entailment. Taxonomy Construction focuses on creating a taxonomy graph from a list of nodes, essentially predicting the connections between them. On the other hand, Lexical Entailment involves determining whether a connection exists between two nodes. We benefit from this similarity between the tasks because we can train one model that is able to solve multiple lexical semantic tasks. Despite this, it is important to note that the tasks differ in how they interpret these relations.
|
| 89 |
+
|
| 90 |
+
We assume that taxonomy-related tasks can be solved with two approaches within our pipeline.
|
| 91 |
+
|
| 92 |
+
Generative approach involves directly applying the same procedure as used in training. Given a hyponym, we use the model to generate a list of corresponding hypernyms. We apply this approach to the Hypernym Discovery and Taxonomy Enrichment datasets.
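As a sketch of this generative approach, the function below prompts the tuned model with a hyponym and its definition and splits the comma-separated continuation into a ranked hypernym list; the function name and simplified prompt (without the full [INST] template from Section 3.1) are assumptions for illustration.

```python
# Illustrative sketch of the generative approach; in practice the prompt is wrapped
# in the [INST]/<<SYS>> instruction template described in Section 3.1.
def generate_hypernyms(model, tokenizer, hyponym: str, definition: str, k: int = 15):
    prompt = f"hyponym: {hyponym} ({definition}) | hypernyms: "
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    text = tokenizer.decode(new_tokens, skip_special_tokens=True)
    # The model is trained to emit a comma-separated list of hypernyms.
    return [h.strip() for h in text.split(",") if h.strip()][:k]
```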
|
| 93 |
+
|
| 94 |
+
Ranking approach involves evaluating the hypernymy relation using perplexity: a lower score
|
| 95 |
+
|
| 96 |
+
indicates a stronger relationship. Beyond assessing this relationship, we can also evaluate the hyponymy relation by simply reversing the hypernym and hyponym positions (this way we obtain reverse perplexity). The ratio between these two scores is a measure of confidence that we use for ranking. The lower the Confidence score, the higher the confidence of the model in the hyponymy relationship between the two constituents of a pair.
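A hedged sketch of this ranking approach is given below: it computes the perplexity of the hypernym completion in both directions and returns their ratio as the confidence score. The prompt strings are simplified and the scoring code is our assumption.

```python
# Hedged sketch of the ranking approach: forward perplexity (hyponym -> hypernym)
# divided by reverse perplexity; a lower value means a more confident hyponymy relation.
import torch

@torch.no_grad()
def completion_perplexity(model, tokenizer, prompt: str, completion: str) -> float:
    # Perplexity of `completion` given `prompt`, with prompt tokens masked out of the loss.
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids.to(model.device)
    labels = full_ids.clone()
    labels[:, :prompt_len] = -100  # ignore prompt positions in the loss
    loss = model(input_ids=full_ids, labels=labels).loss
    return float(torch.exp(loss))

def confidence(model, tokenizer, hyponym: str, hypernym: str) -> float:
    fwd = completion_perplexity(model, tokenizer, f"hyponym: {hyponym} | hypernyms: ", hypernym)
    rev = completion_perplexity(model, tokenizer, f"hyponym: {hypernym} | hypernyms: ", hyponym)
    return fwd / rev
```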
|
| 97 |
+
|
| 98 |
+
We apply this approach for the Taxonomy Construction and Lexical Entailment datasets with slight modifications that will be described in the respective sections 4.3 and 4.4 in more detail.
|
| 99 |
+
|
| 100 |
+
# 4 Experiments
|
| 101 |
+
|
| 102 |
+
In this section, we assess the proposed methodology and the finetuned models, TaxoLLaMA and TaxoLLaMA-bench, on four lexical semantic tasks: Hypernym Discovery, Taxonomy Enrichment, Lexical Entailment, and Taxonomy Construction. We evaluate models in a zero-shot setting and after fine-tuning on the provided train sets for each task.
|
| 103 |
+
|
| 104 |
+
# 4.1 Hypernym Discovery
|
| 105 |
+
|
| 106 |
+
We test our model on the Hypernym Discovery task from SemEval-2018 (Camacho-Collados et al., 2018) using our generative approach. This task features a general English test set, two domain-specific English sets ("Music" and "Medical"), and general test sets for Italian and Spanish. Performance is measured using the Mean Reciprocal Rank (MRR) metric. We test a zero-shot approach, where the model is not tuned on the training datasets. The test set differs from WordNet, may involve multiple hops to hypernyms, and can also cover narrow domains.
|
| 107 |
+
|
| 108 |
+
# 4.2 Taxonomy Enrichment
|
| 109 |
+
|
| 110 |
+
Taxonomy Enrichment aims to identify the most appropriate placement for a missing node within a taxonomy. Continuing the approach of prior works (Zhang et al., 2021; Jiang et al., 2022), the goal is framed as ranking nodes from the graph based on their likelihood of being the hypernym, where successfully placing the node means ranking its correct hypernyms at the top. In our setup, we use the generative approach described in Section 3.3 and depicted in Figure 2b.
|
| 111 |
+
|
| 112 |
+
The Taxonomy Enrichment benchmark encompasses the WordNet Noun, WordNet Verb, MAG-PSY, and MAG-CS datasets (Jiang et al., 2022;
|
| 113 |
+
|
| 114 |
+
<table><tr><td></td><td>1A: English</td><td>2A: Medical</td><td>2B: Music</td><td>1B: Italian</td><td>1C: Spanish</td></tr><tr><td>CRIM* (Bernier-Colborne and Barrière, 2018)</td><td>36.10</td><td>54.64</td><td>60.93</td><td>-</td><td>-</td></tr><tr><td>Hybrid* (Held and Habash, 2019)</td><td>34.07</td><td>64.47</td><td>77.24</td><td>-</td><td>-</td></tr><tr><td>RMM* (Bai et al., 2021)</td><td>39.07</td><td>54.89</td><td>74.75</td><td>-</td><td>-</td></tr><tr><td>T5 (Nikishina et al., 2023)</td><td>45.22</td><td>44.73</td><td>53.35</td><td>24.04</td><td>27.50</td></tr><tr><td>300-sparsans* (Berend et al., 2018)</td><td>-</td><td>-</td><td>-</td><td>25.14</td><td>37.56</td></tr><tr><td>TaxoLLaMA zero-shot</td><td>38.05</td><td>43.09</td><td>42.7</td><td>1.95</td><td>2.21</td></tr><tr><td>TaxoLLaMA-bench zero-shot</td><td>37.66</td><td>42.2</td><td>44.36</td><td>1.47</td><td>2.08</td></tr><tr><td>TaxoLLaMA fine-tuned</td><td>54.39</td><td>77.32</td><td>80.6</td><td>51.58</td><td>57.44</td></tr><tr><td>TaxoLLaMA-bench fine-tuned</td><td>51.59</td><td>73.82</td><td>78.63</td><td>50.95</td><td>58.61</td></tr></table>
|
| 115 |
+
|
| 116 |
+
Table 1: MRR performance on Hypernym Discovery. * refers to the systems that rely on the provided dataset only, without LLM pretraining or additional data being used. Zero-shot is trained on the WordNet data only, without fine-tuning on the target dataset.
|
| 117 |
+
|
| 118 |
+

|
| 119 |
+
(a) Fine-tuning
|
| 120 |
+
|
| 121 |
+

|
| 122 |
+
(b) Few-shot learning
|
| 123 |
+
Figure 3: Experiments for domain and language adaptation on the Hypernym Discovery datasets.
|
| 124 |
+
|
| 125 |
+
Shen et al., 2020). To ensure consistency, 1000 nodes from each dataset were sampled to match the test set from TaxoExpan (Shen et al., 2020). Following Jiang et al. (2022), we consider scaled MRR (Ying et al., 2018) as the main metric, which is the regular MRR multiplied by 10 and averaged over all of a node's correct hypernyms.
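Under this description, the scaled MRR can be computed as in the short sketch below; treating gold hypernyms missing from the ranked list as contributing a reciprocal rank of zero is our reading of the metric, not a detail stated in the paper.

```python
# Sketch of scaled MRR as described above: average reciprocal rank over all of a
# node's correct hypernyms, multiplied by 10 (our reading of Ying et al., 2018).
def scaled_mrr(ranked: list[str], gold: set[str]) -> float:
    reciprocal_ranks = [1.0 / (ranked.index(g) + 1) for g in gold if g in ranked]
    return 10 * sum(reciprocal_ranks) / len(gold) if gold else 0.0
```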
|
| 126 |
+
|
| 127 |
+
# 4.3 Taxonomy Construction
|
| 128 |
+
|
| 129 |
+
This task aims to assemble a taxonomy given a list of nodes and a root. We employ datasets from TexEval-2 (Bordea et al., 2016) with "Eurovoc science", "Eurovoc environment" and "WordNet food" subtasks and the F1 measure for evaluation.
|
| 130 |
+
|
| 131 |
+
We evaluate our model with the ranking approach applied to all node pairs. Using this principle, we iteratively established a threshold below which pairs are considered to have a relationship. The threshold for the "Food" domain was set to 1.8, for "Environment" to 4.6, and for "Science" to 1.89. To further refine the graph, we eliminate cycles by deleting the edge inside a cycle with the highest perplexity. Additionally, we limit each node to a maximum of three hypernyms. For nodes associated with more than three hypernyms, only three with the lowest perplexity scores are retained.
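The post-processing just described can be sketched with networkx as below; the data structures and helper name are assumptions, and the released implementation may differ in details.

```python
# Hedged sketch of the graph post-processing described above.
import networkx as nx

def build_taxonomy(pair_scores: dict[tuple[str, str], float], threshold: float,
                   max_hypernyms: int = 3) -> nx.DiGraph:
    graph = nx.DiGraph()
    # Keep only candidate (hyponym -> hypernym) edges whose perplexity is below the threshold.
    for (hyponym, hypernym), ppl in pair_scores.items():
        if ppl < threshold:
            graph.add_edge(hyponym, hypernym, ppl=ppl)
    # Break cycles by repeatedly deleting the highest-perplexity edge inside a cycle.
    while True:
        try:
            cycle = nx.find_cycle(graph)
        except nx.NetworkXNoCycle:
            break
        u, v = max(cycle, key=lambda e: graph.edges[e[0], e[1]]["ppl"])
        graph.remove_edge(u, v)
    # Keep at most `max_hypernyms` lowest-perplexity hypernyms per node.
    for node in list(graph.nodes):
        hypernym_edges = sorted(graph.out_edges(node, data="ppl"), key=lambda e: e[2])
        for s, t, _ in hypernym_edges[max_hypernyms:]:
            graph.remove_edge(s, t)
    return graph
```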
|
| 132 |
+
|
| 133 |
+
# 4.4 Lexical Entailment
|
| 134 |
+
|
| 135 |
+
Lexical Entailment aims at identifying semantic relationships between phrase pairs. Given a pair of words, the relation of entailment holds if there are some contexts in which one word can be substituted by the other, such that the meaning of the original word can be inferred from the new one.
|
| 136 |
+
|
| 137 |
+
We utilized the ANT entailment subset (Guillou and de Vroe, 2023) (a detailed enhancement of the Levy/Holt dataset (Holt, 2019)) and Hyperlex benchmark (Vulic et al., 2017) for our experiments.
|
| 138 |
+
|
| 139 |
+
ANT Dataset This dataset contains pairs of sentences differing in one argument of the syntactic structure (for example: "The audience applauded the comedian" and "The audience observed the comedian", from Table 2 in (Guillou and de Vroe, 2023)). For each pair, one of the following relations is determined: antonymy, synonymy, directional entailment, or non-directional entailment (i.e., reversed directional entailment). We treat the differing elements of the sentences as hypernym-hyponym pairs if the sentences are in one of the entailment relationships. To evaluate the entailment relations, we utilize the ratio of the hypernym and hyponym ranking scores, normalized via the L2 norm to represent the probability of entailment. For instance, we
|
| 140 |
+
|
| 141 |
+
<table><tr><td></td><td>MAG-CS</td><td>MAG-PSY</td><td>Noun</td><td>Verb</td></tr><tr><td>TaxoExpan (Shen et al., 2020)</td><td>19.3</td><td>44.1</td><td>39.0</td><td>32.5</td></tr><tr><td>GenTaxo (Zeng et al., 2021)</td><td>23.9</td><td>46.4</td><td>28.6</td><td>42.8</td></tr><tr><td>TMN (Zhang et al., 2021)</td><td>24.3</td><td>53.1</td><td>36.7</td><td>35.4</td></tr><tr><td>TaxoEnrich (Jiang et al., 2022)</td><td>57.8</td><td>58.3</td><td>44.2</td><td>45.2</td></tr><tr><td>TaxoLLaMA zero-shot</td><td>7.4</td><td>7.3</td><td>n/a</td><td>n/a</td></tr><tr><td>TaxoLLaMA-bench zero-shot</td><td>8.5</td><td>6.6</td><td>n/a</td><td>n/a</td></tr><tr><td>TaxoLLaMA fine-tuned</td><td>24.9</td><td>29.8</td><td>48.0</td><td>52.4</td></tr><tr><td>TaxoLLaMA-bench fine-tuned</td><td>30.2</td><td>31.4</td><td>45.9</td><td>51.9</td></tr></table>
|
| 142 |
+
|
| 143 |
+
calculate the perplexity for "move" as a hypernym of "walk" $(PPL_{m\rightarrow w})$ and vice versa $(PPL_{w\rightarrow m})$ . The ratio $\frac{PPL_{m\rightarrow w}}{PPL_{w\rightarrow m}}$ of these scores will thus indicate the model's confidence.
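One plausible reading of the L2 normalization mentioned above is sketched below; the exact formula is our assumption, chosen so that a lower forward perplexity relative to the reverse direction yields a probability closer to one.

```python
# Assumed (not confirmed by the paper) reading of the L2-normalized entailment score.
import math

def entailment_probability(ppl_fwd: float, ppl_rev: float) -> float:
    # ppl_fwd: perplexity of the candidate hypernym given the hyponym (e.g. "walk" -> "move");
    # ppl_rev: the reverse direction. Strong hypernymy evidence (ppl_fwd << ppl_rev)
    # pushes the score toward 1.
    return ppl_rev / math.sqrt(ppl_fwd ** 2 + ppl_rev ** 2)
```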
|
| 144 |
+
|
| 145 |
+
HyperLex Dataset This dataset focuses on the entailment for verbs and nouns, evaluating on a scale from 0 to 10. A score of 0 indicates no entailment, while 10 means strong entailment. The goal is to achieve the highest correlation with the gold-standard scores. For Hyperlex, we consider the ranking approach with no additional processing.
|
| 146 |
+
|
| 147 |
+
Previous methods generate embeddings and train a simple SVM on the Hyperlex training set. Finetuned models like RoBERTa demand substantial computational efforts and are tailored to the Hyperlex dataset. Compared to those prior studies, our zero-shot model uses perplexities directly as the predictions without a need for training. Therefore, a direct comparison might overlook the unique methodologies and resource implications, suggesting that each approach should be evaluated within its specific context.
|
| 148 |
+
|
| 149 |
+
# 5 Results
|
| 150 |
+
|
| 151 |
+
This section describes the main results of generative and ranking setup experiments for all tasks.
|
| 152 |
+
|
| 153 |
+
# 5.1 Hypernym Discovery
|
| 154 |
+
|
| 155 |
+
The results for the English language in Table 1 indicate that both the fine-tuned TaxoLLaMA and TaxoLLaMA-bench outperform previous SOTA results by a large margin. While the zero-shot performance of our models may be lower than when fine-tuned, they still deliver outcomes comparable to previous results for general English tasks and do not fall far behind in domain-specific tasks, considering that previous approaches are all fine-tuned.
|
| 156 |
+
|
| 157 |
+
Table 2: Scaled MRR Across Tasks for Taxonomy Enrichment. Here, "n/a" stands for "not applicable", as TaxoLLaMA has already seen WordNet data and its performance cannot be considered as zero-shot. Zero-shot is trained on the WordNet data only, without fine-tuning on the target dataset.
|
| 158 |
+
|
| 159 |
+
<table><tr><td></td><td>S</td><td>E</td><td>F</td></tr><tr><td>TexEval-2 best (Bordea et al., 2016)</td><td>31.3</td><td>30.0</td><td>36.01</td></tr><tr><td>TAXI+ (Aly et al., 2019)</td><td>41.4</td><td>30.9</td><td>34.1</td></tr><tr><td>Graph2Taxo pure (Shang et al., 2020)</td><td>39.0</td><td>37.0</td><td>-</td></tr><tr><td>Graph2Taxo best (Shang et al., 2020)</td><td>47.0</td><td>40.0</td><td>-</td></tr><tr><td>LMScorer (Jain and Espinosa Anke, 2022)</td><td>31.8</td><td>26.4</td><td>24.9</td></tr><tr><td>RestrictMLM (Jain and Espinosa Anke, 2022)</td><td>37.9</td><td>23.0</td><td>24.9</td></tr><tr><td>TaxoLLaMA</td><td>44.55</td><td>45.13</td><td>51.71</td></tr><tr><td>TaxoLLaMA-bench</td><td>42.36</td><td>44.82</td><td>51.18</td></tr></table>
|
| 160 |
+
|
| 161 |
+
Table 3: F1 score for the Taxonomy Construction Task. "S" stands for the (S)cience dataset, "E" for the (E)nvironment dataset, and "F" for the (F)ood domain dataset. Zero-shot is trained on the WordNet data only, without fine-tuning on the target dataset.
|
| 162 |
+
|
| 163 |
+
Multilingual Performance For Italian and Spanish, the fine-tuned model surpasses previous SOTA results. This suggests the model is effective in a multilingual setting, given that LLaMA-2 is initially multilingual and that our finetuning was performed exclusively on English pairs. However, we observe that the zero-shot model struggles to generate accurate hypernyms for languages other than English. It is worth mentioning that neither Italian nor Spanish data were included in the instruction tuning dataset.
|
| 164 |
+
|
| 165 |
+
Zero-shot Performance To investigate this zero-shot underperformance, we analyzed the effects of fine-tuning on both domains and languages, as shown in Figure 3a. It's clear that, except for task 2B, the model exceeds previous SOTA results with just 50 samples for fine-tuning. Additionally, the fluctuating scores highlight the model's sensitivity to the quality and nature of the training data.
|
| 166 |
+
|
| 167 |
+
Few-shot Performance We also explored the few-shot learning approach for the Italian and Spanish languages to assess the model's adaptability in an in-context learning setting, as shown in Figure 3b. The model surpassed previous SOTA benchmarks with a near-logarithmic pattern of improvement for Italian with 30 and 50 shots, yet did not perform as well for Spanish. We attribute the suboptimal few-shot performance to the 4-bit quantization and the model's relatively small size. Smaller models typically exhibit lower performance across various benchmarks compared to their larger counterparts, as illustrated by the example of LLaMA-2 (Touvron et al., 2023). Additionally, the capacity of smaller or quantized models is inferior compared to larger models, a finding corroborated by previous research (Wang et al., 2022; Frantar et al., 2023; Lin et al., 2024; Egiazarian
|
| 168 |
+
|
| 169 |
+
<table><tr><td></td><td>AUCN</td><td>AP</td></tr><tr><td>GBL (Hosseini et al., 2018)</td><td>3.79</td><td>58.36</td></tr><tr><td>CTX (Hosseini et al., 2021)</td><td>15.44</td><td>65.66</td></tr><tr><td>GBL-PK=4 (McKenna et al., 2023)</td><td>13.91</td><td>64.71</td></tr><tr><td>CTX-PK=4 (McKenna et al., 2023)</td><td>25.86</td><td>67.47</td></tr><tr><td>TaxoLLaMA zero-shot</td><td>0.89</td><td>51.61</td></tr><tr><td>TaxoLLaMA-bench zero-shot</td><td>2.82</td><td>54.24</td></tr><tr><td>TaxoLLaMA-verb zero-shot</td><td>19.28</td><td>69.51</td></tr></table>
|
| 170 |
+
|
| 171 |
+
Table 4: Performance on the Lexical Entailment ANT dataset. Zero-shot is trained on the WordNet data only, without fine-tuning on the target dataset.
|
| 172 |
+
|
| 173 |
+
<table><tr><td>Setting</td><td>Model</td><td>Lexical</td><td>Random</td></tr><tr><td rowspan="3">fine-tuned</td><td>RoBERTa best (Pitarch et al., 2023)</td><td>79.4</td><td>82.8</td></tr><tr><td>RoBERTa mean (Pitarch et al., 2023)</td><td>65.8</td><td>63.8</td></tr><tr><td>LEAR (Vulić and Mrkšić, 2018)</td><td>54.4</td><td>69.2</td></tr><tr><td rowspan="6">zero-shot</td><td>Relative (Camacho-Collados et al., 2019)</td><td>54.3</td><td>58.4</td></tr><tr><td>Pair2Vec (Joshi et al., 2019)</td><td>33.4</td><td>54.3</td></tr><tr><td>GRV SI (Jameel et al., 2018)</td><td>48.3</td><td>55.4</td></tr><tr><td>SeVeN (Espinosa-Anke and Schockaert, 2018)</td><td>46.9</td><td>62.7</td></tr><tr><td>FastText</td><td>43.9</td><td>54.3</td></tr><tr><td>TaxoLLaMA</td><td>70.2</td><td>59.3</td></tr></table>
|
| 174 |
+
|
| 175 |
+
Table 5: Spearman Correlation for lexical and random test subsets of Hyperlex benchmark. Zero-shot is trained on the WordNet data only, without fine-tuning on the target dataset.
|
| 176 |
+
|
| 177 |
+
et al., 2024). The advantage gained from few-shot learning scenarios is less pronounced in quantized models compared to full-precision models. This observation has been specifically documented in the paper of Lin et al. (2024).
|
| 178 |
+
|
| 179 |
+
# 5.2 Taxonomy Enrichment
|
| 180 |
+
|
| 181 |
+
The results presented in Table 2 show that our model surpasses all previous methods on the WordNet Noun and WordNet Verb datasets but does not perform as well as the current SOTA method on the more specialized MAG-CS and MAG-PSY taxonomies even after fine-tuning. We also notice that TaxoLLaMA-bench, having less data, unexpectedly performed better on the MAG datasets. To delve deeper into the reasons behind overall underperformance, we conducted a comprehensive error analysis, detailed in Section 6.1.
|
| 182 |
+
|
| 183 |
+
# 5.3 Taxonomy Construction
|
| 184 |
+
|
| 185 |
+
The results in Table 3 demonstrate that applying our method directly leads to SOTA performance on the "Environment" and "Food" datasets, and secures a second-place ranking for the "Science" dataset. Further analysis of the graphs generated through our modeling is provided in Section 6.2.
|
| 186 |
+
|
| 187 |
+
# 5.4 Lexical Entailment
|
| 188 |
+
|
| 189 |
+
The results of TaxoLLaMA on the Lexical Entailment datasets surpassed our expectations.
|
| 190 |
+
|
| 191 |
+
Results on the ANT Dataset From the results on the ANT dataset in Table 4, we benchmark our models against prior SOTA performances. A notable finding is the obvious difference in performance between TaxoLLaMA, which is trained on both nouns and verbs, and TaxoLLaMA-verb, which focuses solely on verbs.
|
| 192 |
+
|
| 193 |
+
TaxoLLaMA-verb outperforms TaxoLLaMA in Lexical Entailment, suggesting difficulties in
|
| 194 |
+
|
| 195 |
+
processing nouns and verbs simultaneously, which might impede verb learning, possibly due to quantization and LoRA adapter tuning constraints. This issue seems specific to the entailment task, as it does not emerge in other tasks, such as Taxonomy Enrichment, which also includes a verb dataset. This discrepancy could stem from the metrics requiring a precise normalized perplexity ranking.
|
| 196 |
+
|
| 197 |
+
Table 4 shows that TaxoLLaMA-verb achieves SOTA performance on Average Precision and is second by normalized AUC. The comparison with previous SOTA results is skewed, as the best-performing models benefited from the use of additional Entailment Smoothing (McKenna et al., 2023) on top of the model. This technique has yet to be applied to our models, which might be a promising direction for future enhancements.
|
| 198 |
+
|
| 199 |
+
Results on the HyperLex Dataset Table 5 demonstrates the superiority of our model over the previous SOTA in a zero-shot context for the "Lexical" subset and a second-place ranking for the "Random" subset. Contrary to the common trend where other models score higher on the random subset, our method does not follow this pattern, suggesting that the larger training size of the random subset benefits other methods more. Despite the straightforward zero-shot approach of our model, it still achieves notably high results. Future work could explore using this score as a meta-feature in task-specific models or adapting our entire model more closely to this task.
|
| 200 |
+
|
| 201 |
+
# 6 Error Analysis
|
| 202 |
+
|
| 203 |
+
In this section, we analyze the errors made by the TaxoLLaMA model, explore the reasons behind these inaccuracies, and suggest potential strategies for mitigation of LLMs applied to taxonomies.
|
| 204 |
+
|
| 205 |
+

|
| 206 |
+
Figure 4: Average percentage of error types across Hypernym Discovery and Taxonomy Enrichment datasets.
|
| 207 |
+
|
| 208 |
+
# 6.1 Hypernym Discovery and Taxonomy Enrichment
|
| 209 |
+
|
| 210 |
+
As we apply the same generative approach for both Hypernym Discovery and Taxonomy Enrichment we perform the joint error analysis. We split the process into four steps: (i) manual analysis to identify the most common errors; (ii) automatic analysis of errors using ChatGPT; (iii) comparing and merging the most common errors identified; (iv) classification of the errors using ChatGPT.
|
| 211 |
+
|
| 212 |
+
First, we take about 200 random examples from both Hypernym Discovery and Taxonomy Enrichment datasets and write explanations of why the model fails to generate the correct hypernym. The following four classes are identified: (i) predicted hypernyms are too broad; (ii) incorrect/irrelevant definition generated by ChatGPT; (iii) the model was unable to generate relevant candidates in the same semantic field; (iv) miscellaneous cases.
|
| 213 |
+
|
| 214 |
+
We also use the prompt in Example 6 to ask ChatGPT to generate error types. The output is provided in Example 7; Table 8 summarizes all the error types generated during several runs. Then, we merge automatically and manually identified error types into the following classes:
|
| 215 |
+
|
| 216 |
+
1. Overly Broad Predictions: The model often generates predictions encompassing a broader concept than the true hypernym.
|
| 217 |
+
2. Underly Broad Predictions: Conversely, some predictions are too narrow and fail to capture the broader concept represented by the true hypernym.
|
| 218 |
+
3. Inaccurate Predictions: The model may predict words that are semantically very close to the true hypernym but struggles to match the exact wording.
|
| 219 |
+
4. Conceptual Ambiguity: The model may
|
| 220 |
+
|
| 221 |
+

|
| 222 |
+
Figure 5: Automatic Evaluation of the MAG datasets with the ChatGPT model. "True" denotes the number of gold answers that ChatGPT preferred over TaxoLLaMA answers; "Predicted" is when ChatGPT preferred TaxoLLaMA output; "Both" and "None" options were also possible answers for ChatGPT.
|
| 223 |
+
|
| 224 |
+
struggle with ambiguous input words or concepts, leading to incorrect predictions.
|
| 225 |
+
|
| 226 |
+
5. Incorrect definitions: The model gets confused with the incorrect/inaccurate definition retrieved from external sources.
|
| 227 |
+
|
| 228 |
+
We used Prompt 8, presented in Appendix A, to classify incorrectly predicted instances. The results for each task and each dataset are presented in Appendix B in Table 9, and the average distribution is shown in Figure 4. We also provide Table 10 with an example for each error type. The most common issue (75% of cases) is overly broad predicted concepts. This can be explained by the model's adaptation to domain datasets that are richer than WordNet, such as the "Music" and "Medical" domains. For Italian and Spanish, significant inaccuracies were attributed to grammatical complexities, dataset limitations, linguistic intricacies, and the lack of pretraining data. Similarly, the MAG datasets faced issues with specificity and ambiguity, which led to lower results of TaxoLLaMA compared with the WordNet-based datasets, as shown in Table 2.
|
| 229 |
+
|
| 230 |
+
Manual examination of MAG taxonomies reveals misaligned instances, like "olfactory toxicity in fish" being classified as a hyponym of "neuroscience". Furthermore, we assess the accuracy of the predicted hypernyms using ChatGPT, inspired by contemporary research (Rafailov et al., 2023). We provide the inputs, predicted nodes, and ground truth nodes to ChatGPT, asking for preference. As depicted in Figure 5, ChatGPT mostly prefers neither of the answers and ground truth hypernyms only slightly more frequently than the predicted ones. The example of the input query is presented
|
| 231 |
+
|
| 232 |
+
<table><tr><td>Metric</td><td>Science</td><td>Environment</td><td>Food</td></tr><tr><td colspan="4">Original</td></tr><tr><td># Nodes</td><td>125</td><td>261</td><td>1486</td></tr><tr><td># Edges</td><td>124</td><td>261</td><td>1533</td></tr><tr><td colspan="4">Constructed</td></tr><tr><td># Nodes</td><td>78</td><td>216</td><td>1132</td></tr><tr><td># Edges</td><td>71</td><td>507</td><td>1372</td></tr><tr><td># Nodes Missing</td><td>48</td><td>45</td><td>354</td></tr><tr><td># Weak Components</td><td>8</td><td>5</td><td>51</td></tr><tr><td># Nodes w/o original hypernym</td><td>4</td><td>5</td><td>39</td></tr><tr><td># Nodes w/o path to original hypernym</td><td>29</td><td>70</td><td>308</td></tr><tr><td># Nodes w/ path to original hypernym</td><td>44</td><td>140</td><td>784</td></tr><tr><td>Mean Distance to original hypernym</td><td>1.02</td><td>1.15</td><td>1.06</td></tr></table>
|
| 233 |
+
|
| 234 |
+
Table 6: Statistics of the original graph and of the constructed graph with the highest F1 score. The lower part of the table corresponds to the constructed graph statistics.
|
| 235 |
+
|
| 236 |
+
in Appendix A in Example 9.
|
| 237 |
+
|
| 238 |
+
We also evaluate the overlap between MAG datasets with WordNet data, discovering minimal correspondence. Only $5\%$ of the nodes from the entire graph are present in the WordNet graph, with just $2\%$ of edges for CS and $4\%$ for PSY matching. For $92\%$ of these, there is no path in the WordNet graph. Among the remaining connections, we see that $28\%$ in CS and $10\%$ in PSY represent cases where nodes are hypernyms of themselves. This also partially explains the low results of TaxoLLaMA, as MAG datasets greatly differ from the data used for TaxoLLaMA training.
|
| 239 |
+
|
| 240 |
+
Finally, we visualize the embeddings, revealing a notable discrepancy between predictions and ground truth in the MAG subsets, which was not seen with WordNet. Detailed observations from this analysis are documented in Appendix C.
|
| 241 |
+
|
| 242 |
+
# 6.2 Taxonomy Construction
|
| 243 |
+
|
| 244 |
+
Our analysis of the predicted graphs across various domain datasets, based on statistics from Table 6, reveals consistent patterns. Generally, the gold standard graphs feature more edges, except in the environment domain. The model often omits entire clusters of nodes rather than individual ones: about $30\%$ of nodes in the graph constructed with TaxoLLaMA lack a path to their actual parents, indicating they reside in separate components.
|
| 245 |
+
|
| 246 |
+
Nevertheless, the existing paths are of rather high quality, suggesting the model is either very accurate or completely off-target. The model assigns high perplexity to certain paths, which are then incorrectly excluded. This tendency indicates a particular challenge with concepts that are neither too specific nor too general but fall in the middle of the taxonomy.
|
| 247 |
+
|
| 248 |
+
The nature of perplexity as a relative metric contributes to this issue, as some edges may not be created due to surpassing the perplexity threshold. Adjusting the threshold introduces incorrect edges, urging us to consider alternative approaches like using LLMs as embeddings.
|
| 249 |
+
|
| 250 |
+
# 6.3 Lexical Entailment
|
| 251 |
+
|
| 252 |
+
Our examination of the ANT dataset showed it has nearly 3000 test samples but only 589 unique verbs. This means that errors on one verb can be replicated throughout the dataset. However, examining the overlap with WordNet revealed that only 7 verbs were found in the same form. Lemmatization increases the count to 338, but about $42\%$ of unique verbs are still not found in WordNet. No paths are found between the verbs present in WordNet, which might have influenced model performance on the task. Hyperlex demonstrates better statistics, with nearly half of the words being unique and $88\%$ found in WordNet. Only $27\%$ of pairs are present in the taxonomy, and $99\%$ lack a connecting path.
|
| 253 |
+
|
| 254 |
+
Perplexity-related errors show high values for polysemous pairs (e.g., "spade is a type of card") and low values for synonyms or paraphrases, indicating a semantic closeness but no hypernymy relation. This points to the model's struggle with lexical diversity and ambiguity, emphasizing the need for disambiguation abilities in entailment tasks. Additional analysis is available in Appendix D.
|
| 255 |
+
|
| 256 |
+
# 7 Conclusion
|
| 257 |
+
|
| 258 |
+
In this paper, we introduce TaxoLLaMA, an LLM finetuned on WordNet-3.0 that is capable of solving various lexical semantic tasks via hypernym prediction. It achieves SOTA results in 11 out of 16 tasks and secures the second position in 4 tasks.
|
| 259 |
+
|
| 260 |
+
Manual and ChatGPT-based error analysis shows that most errors (75%) are overly broad predicted concepts, due to overfitting to the idiosyncratic WordNet structure and the inability to adapt to the target datasets. Experiments showed that definitions greatly contribute to the final scores for Taxonomy Enrichment, similarly to Moskvoretskii et al. (2024), as they help to better disambiguate input words. Regarding error analysis, the most difficult datasets were the MAG datasets (Jiang et al., 2022), as they greatly differ from the data used for training our model.
|
| 261 |
+
|
| 262 |
+
# Limitations
|
| 263 |
+
|
| 264 |
+
We find that the main limitations of our work are as follows:
|
| 265 |
+
|
| 266 |
+
- Dozens of large pre-trained generative models exist, and we report results only on LLaMA-2. An alternative base LLM could further improve the results. However, our experiments showed that LLaMA-2 achieves decent performance on hypernymy prediction compared to other models. Moreover, our goal was also to provide a lightweight model that could be used for further research with limited resources. Finally, the research focused on the application of LLMs and not on an exhaustive search over all LLM models.
|
| 267 |
+
- We did not apply the "Ranking" approach to the Taxonomy Enrichment dataset, which would also be possible, as finding the most appropriate node for the input word can also be seen as ranking. However, initial experiments showed lower results.
|
| 268 |
+
- Possible "hypernymy hallucination" may also be considered as a limitation: apart from the generalization capabilities the model may overpredict types, or even invent new words or semantic categories.
|
| 269 |
+
- Another specificity of our model is its possible excessive focus on a single word sense, which may result in the inability to generate a wider variety of options.
|
| 270 |
+
- We tried to be exhaustive, yet we possibly did not cover some taxonomy-related tasks.
|
| 271 |
+
|
| 272 |
+
# Ethical Statement
|
| 273 |
+
|
| 274 |
+
In our research, we employ advanced neural models like LLaMA-2, which have been pre-trained on a diverse corpus, including user-generated content. Although the creators of these models have endeavored to remove harmful or biased data, it is important to recognize that some biases may still persist in the model outputs.
|
| 275 |
+
|
| 276 |
+
This acknowledgment does not undermine the validity of our methods. We have designed our techniques to be flexible, allowing them to be applied to alternative pre-trained models that have undergone more rigorous debiasing processes. To the best of our knowledge, aside from the challenge of mitigating inherent biases, our work does not raise any additional ethical concerns.
|
| 277 |
+
|
| 278 |
+
# Acknowledgements
|
| 279 |
+
|
| 280 |
+
This work was supported by the DFG through the project "ACQuA: Answering Comparative Questions with Arguments" (grants BI 1544/7-1 and HA 5851/2-1) as part of the priority program "RATIO: Robust Argumentation Machines" (SPP 1999).
|
| 281 |
+
|
| 282 |
+
The work of Viktor Moskvoretskii was supported by Analytical center under the RF Government (subsidy agreement 000000D730321P5Q0002, Grant No. 70-2021-00145 02.11.2021).
|
| 283 |
+
|
| 284 |
+
# References
|
| 285 |
+
|
| 286 |
+
Rami Aly, Shantanu Acharya, Alexander Ossa, Arne Kohn, Chris Biemann, and Alexander Panchenko. 2019. Every child should have parents: A taxonomy refinement algorithm based on hyperbolic term embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4811-4817, Florence, Italy. Association for Computational Linguistics.
|
| 287 |
+
Yuhang Bai, Richong Zhang, Fanshuang Kong, Junfan Chen, and Yongyi Mao. 2021. Hypernym discovery via a recurrent mapping model. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2912-2921, Online. Association for Computational Linguistics.
|
| 288 |
+
Gábor Berend, Marton Makrai, and Péter Földiák. 2018. 300-sparsans at SemEval-2018 task 9: Hypernymy as interaction of sparse attributes. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 928–934, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 289 |
+
Gabriel Bernier-Colborne and Caroline Barrière. 2018. CRIM at SemEval-2018 task 9: A hybrid approach to hypernym discovery. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 725-731, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 290 |
+
Georgeta Bordea, Els Lefever, and Paul Buitelaar. 2016. SemEval-2016 task 13: Taxonomy extraction evaluation (TExEval-2). In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval2016), pages 1081-1091, San Diego, California. Association for Computational Linguistics.
|
| 291 |
+
Jose Camacho-Collados, Claudio Delli Bovi, Luis Espinosa-Anke, Sergio Oramas, Tommaso Pasini, Enrico Santus, Vered Shwartz, Roberto Navigli, and Horacio Saggion. 2018. SemEval-2018 task 9: Hypernym discovery. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 712-724, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 292 |
+
Jose Camacho-Collados, Luis Espinosa-Anke, Shoaib Jameel, and Steven Schockaert. 2019. A latent variable model for learning distributional relation vectors.
|
| 293 |
+
|
| 294 |
+
In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 4911-4917. International Joint Conferences on Artificial Intelligence Organization.
|
| 295 |
+
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. QLoRA: Efficient finetuning of quantized LLMs. In Thirty-seventh Conference on Neural Information Processing Systems.
|
| 296 |
+
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2023. A survey on in-context learning.
|
| 297 |
+
Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, and Dan Alistarh. 2024. Extreme compression of large language models via additive quantization.
|
| 298 |
+
Luis Espinosa-Anke and Steven Schockaert. 2018. SeVeN: Augmenting word embeddings with unsupervised relation vectors. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2653-2665, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
|
| 299 |
+
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2023. GPTQ: Accurate post-training quantization for generative pre-trained transformers.
|
| 300 |
+
Liane Guillou and Sander Bijl de Vroe. 2023. ANT dataset.
|
| 301 |
+
William Held and Nizar Habash. 2019. The effectiveness of simple hybrid systems for hypernym discovery. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3362-3367, Florence, Italy. Association for Computational Linguistics.
|
| 302 |
+
Xavier Holt. 2019. Probabilistic models of relational implication.
|
| 303 |
+
Mohammad Javad Hosseini, Nathanael Chambers, Siva Reddy, Xavier R. Holt, Shay B. Cohen, Mark Johnson, and Mark Steedman. 2018. Learning typed entailment graphs with global soft constraints. Transactions of the Association for Computational Linguistics, 6:703-717.
|
| 304 |
+
Mohammad Javad Hosseini, Shay B. Cohen, Mark Johnson, and Mark Steedman. 2021. Open-domain contextual link prediction and its complementarity with entailment graphs. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 2790-2802, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 305 |
+
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
|
| 306 |
+
|
| 307 |
+
Devansh Jain and Luis Espinosa Anke. 2022. Distilling hypernymy relations from language models: On the effectiveness of zero-shot taxonomy induction. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 151-156, Seattle, Washington. Association for Computational Linguistics.
|
| 308 |
+
Shoaib Jameel, Zied Bouraoui, and Steven Schockaert. 2018. Unsupervised learning of distributional relation vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 23-33, Melbourne, Australia. Association for Computational Linguistics.
|
| 309 |
+
Minhao Jiang, Xiangchen Song, Jieyu Zhang, and Jiawei Han. 2022. TaxoEnrich: Self-supervised taxonomy completion via structure-semantic representations. In Proceedings of the ACM Web Conference 2022, WWW '22, page 925-934, New York, NY, USA. Association for Computing Machinery.
|
| 310 |
+
Mandar Joshi, Eunsol Choi, Omer Levy, Daniel Weld, and Luke Zettlemoyer. 2019. Pair2vec: Compositional word-pair embeddings for cross-sentence inference. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 3597-3608, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 311 |
+
David Jurgens and Mohammad Taher Pilehvar. 2016. SemEval-2016 task 14: Semantic taxonomy enrichment. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1092-1102, San Diego, California. Association for Computational Linguistics.
|
| 312 |
+
Carina Kauf, Anna A Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, and Alessandro Lenci. 2023. Event knowledge in large language models: the gap between the impossible and the unlikely. Cognitive Science, 47(11):e13386.
|
| 313 |
+
Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024. AWQ: Activation-aware weight quantization for llm compression and acceleration. In MLSys.
|
| 314 |
+
Nick McKenna, Tianyi Li, Mark Johnson, and Mark Steedman. 2023. Smoothing entailment graphs with language models. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 551-563, Nusa Dua, Bali. Association for Computational Linguistics.
|
| 315 |
+
George A Miller. 1998. WordNet: An electronic lexical database. MIT press.
|
| 316 |
+
|
| 317 |
+
Viktor Moskvoretskii, Alexander Panchenko, and Irina Nikishina. 2024. Are large language models good at lexical semantics? a case of taxonomy learning. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 1498-1510, Torino, Italia. ELRA and ICCL.
|
| 318 |
+
Irina Nikishina, Polina Chernomorchenko, Anastasiia Demidova, Alexander Panchenko, and Chris Biemann. 2023. Predicting terms in IS-a relations with pre-trained transformers. In Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings), pages 134-148, Nusa Dua, Bali. Association for Computational Linguistics.
|
| 319 |
+
Lucia Pitarch, Jordi Bernad, Lacramioara Dranca, Carlos Bobed Lisbona, and Jorge Gracia. 2023. No clues good clues: out of context lexical relation classification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5607-5625, Toronto, Canada. Association for Computational Linguistics.
|
| 320 |
+
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In NeurIPS.
|
| 321 |
+
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
|
| 322 |
+
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE transactions on neural networks, 20(1):61-80.
|
| 323 |
+
Chao Shang, Sarthak Dash, Md. Faisal Mahbub Chowdhury, Nandana Mihindukulasooriya, and Alfio Gliozzo. 2020. Taxonomy construction of unseen domains via graph-based cross-domain knowledge transfer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2198-2208, Online. Association for Computational Linguistics.
|
| 324 |
+
Jiaming Shen, Zhihong Shen, Chenyan Xiong, Chi Wang, Kuansan Wang, and Jiawei Han. 2020. TaxoExpan: Self-supervised taxonomy expansion with position-enhanced graph neural network. In Proceedings of The Web Conference 2020, pages 486-497.
|
| 325 |
+
Ralf C Staudemeyer and Eric Rothstein Morris. 2019. Understanding LSTM-a tutorial into long short-term memory recurrent neural networks. arXiv preprint arXiv:1909.09586.
|
| 326 |
+
Kai Sun, Yifan Ethan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. 2023. Head-to-tail: How knowledgeable are large language models (LLMs)? A.k.a. will LLMs replace knowledge graphs? arXiv preprint arXiv:2308.10168.
|
| 329 |
+
Raphael Tang, Xinyu Zhang, Jimmy Lin, and Ferhan Ture. 2023. What do llamas really think? revealing preference biases in language model representations. arXiv preprint arXiv:2311.18812.
|
| 330 |
+
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. LLaMA 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.
|
| 331 |
+
Ivan Vulić and Nikola Mrkšić. 2018. Specialising word vectors for lexical entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1134-1145, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 332 |
+
Ivan Vulić, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. HyperLex: A large-scale evaluation of graded lexical entailment. Computational Linguistics, 43(4).
|
| 333 |
+
Yueqian Wang, Chang Liu, Kai Chen, Xi Wang, and Dongyan Zhao. 2022. SMASH: Improving SMAll language models' few-SHot ability with prompt-based distillation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6608-6619, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 334 |
+
Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18. ACM.
|
| 335 |
+
Qingkai Zeng, Jinfeng Lin, Wenhao Yu, Jane Cleland-Huang, and Meng Jiang. 2021. Enhancing taxonomy completion with concept generation via fusing relational representations. In KDD, pages 2104-2113. ACM.
|
| 336 |
+
|
| 337 |
+
Jieyu Zhang, Xiangchen Song, Ying Zeng, Jiaze Chen, Jiaming Shen, Yuning Mao, and Lei Li. 2021. Taxonomy completion via triplet matching network. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 4662-4670.
|
| 338 |
+
|
| 339 |
+
# A Using ChatGPT for Definition Generation and Automatic Error Analysis
|
| 340 |
+
|
| 341 |
+
Below are two example prompts (4 and 5) used with ChatGPT for definition generation. The MAG PSY and MAG CS datasets for Taxonomy Enrichment, as well as the ANT and HyperLex datasets for Lexical Entailment, do not come with definitions. We therefore developed separate prompts for the two types of datasets: for hypernym prediction we need a definition for a single input word, whereas for Lexical Entailment we generate definitions for two words simultaneously, as they may help with disambiguation. Table 7 reports statistics for the generated definitions.
|
| 342 |
+
|
| 343 |
+
(4)
|
| 344 |
+
|
| 345 |
+
Write a definition for the word/phrase in one sentence.
|
| 346 |
+
|
| 347 |
+
Example:
|
| 348 |
+
|
| 349 |
+
Word: cackle
|
| 350 |
+
|
| 351 |
+
Definition: act as a caddie and carry clubs for a player
|
| 352 |
+
|
| 353 |
+
Word: eszopiclone 3 mg
|
| 354 |
+
|
| 355 |
+
Definition:
|
| 356 |
+
|
| 357 |
+
(5)
|
| 358 |
+
|
| 359 |
+
Write a definition for Word 1 and Word 2. Each definition should be in one sentence. If a word is ambiguous, use the other word to disambiguate it.
|
| 360 |
+
|
| 361 |
+
Example:
|
| 362 |
+
|
| 363 |
+
Word 1: depression
|
| 364 |
+
|
| 365 |
+
Word 2: melancholy
|
| 366 |
+
|
| 367 |
+
Definition 1: a mental state characterized by a pessimistic sense of inadequacy and a despondent lack of activity
|
| 368 |
+
|
| 369 |
+
Definition 2: a constitutional tendency to be gloomy and depressed
|
| 370 |
+
|
| 371 |
+
Word 1: conflict
|
| 372 |
+
|
| 373 |
+
Word 2: disagreement
|
| 374 |
+
|
| 375 |
+
<table><tr><td>Dataset</td><td>Total</td><td>Generated with ChatGPT</td><td>From Wikidata</td></tr><tr><td>MAG PSY</td><td>23,156</td><td>12,823</td><td>10,333</td></tr><tr><td>MAG CS</td><td>29,484</td><td>5,714</td><td>23,770</td></tr><tr><td>ANT</td><td>5,933</td><td>5,933</td><td>-</td></tr><tr><td>HyperLex</td><td>2,307</td><td>2,307</td><td>-</td></tr></table>
|
| 376 |
+
|
| 377 |
+
Table 7: Statistics on definitions generated with ChatGPT for different tasks.
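For illustration, the single-word prompt (Example 4 above) could be sent to ChatGPT roughly as follows. This is a minimal sketch assuming the `openai` Python client; the model name and call settings are placeholders, not the authors' exact setup.

```python
# Hypothetical sketch of definition generation with ChatGPT (not the authors' code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Write a definition for the word/phrase in one sentence.\n\n"
    "Example:\n"
    "Word: cackle\n"
    "Definition: act as a caddie and carry clubs for a player\n\n"  # few-shot pair from Example 4
    "Word: {word}\n"
    "Definition:"
)

def generate_definition(word: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the paper only says "ChatGPT"
        messages=[{"role": "user", "content": PROMPT.format(word=word)}],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip()

print(generate_definition("eszopiclone 3 mg"))
```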
|
| 378 |
+
|
| 379 |
+
Example 6 shows the input given to ChatGPT to automatically detect error types for hypernym prediction, and Example 7 shows the model output. Example 8 demonstrates the prompt used for automatic error classification.
|
| 380 |
+
|
| 381 |
+
(6)
|
| 382 |
+
|
| 383 |
+
Here is the data in four columns separated with "\t":
|
| 384 |
+
|
| 385 |
+
topic: psychology input word: social construction of schizophrenia
|
| 386 |
+
|
| 387 |
+
true: psychiatry, predictions: psychosis, social, psychological, and philosophical aspects, social psychology, psych, schizophrenia, antipsychotic, psychotic, antipsychotic
|
| 388 |
+
|
| 389 |
+
topic: psychology input word: use heroin true: methadone maintenance, predictions: heroin, opium tincture, heroin, opium, tincture, heroin, opium, and morphine, man, heroin, drug addiction
|
| 390 |
+
|
| 391 |
+
.
|
| 392 |
+
|
| 393 |
+
topic: psychology input word: coordinated management of meaning true: social psychology, predictions: communication disorders, language and literacy, and social studies, language and literacy, metaphor, metonymy, and politics, metonymy, metaphor
|
| 394 |
+
|
| 395 |
+
There is a topic, input word, correct hypernym (true), and model prediction (predicted). Your task is to compare true answers and model predictions and summarize what are the main error types of the model.
|
| 396 |
+
|
| 397 |
+
(7)
|
| 398 |
+
|
| 399 |
+
To summarize the main error types of the model's predictions compared to the true hypernyms:
|
| 400 |
+
|
| 401 |
+
1. Overly Broad Predictions: The model often generates predictions that encompass a broader concept than the true hypernym.
|
| 402 |
+
|
| 403 |
+
For example:
|
| 404 |
+
|
| 405 |
+
"medication effects" was predicted with terms like "drug effects", "drug interactions", and "medication", which are broader than the true hypernym "clinical psychology" and "psychiatry."
|
| 406 |
+
|
| 407 |
+
2. Underly Broad Predictions: Conversely, some predictions are too narrow and fail to capture the broader concept represented by the true hypernym.
|
| 408 |
+
|
| 409 |
+
For example:
|
| 410 |
+
|
| 411 |
+
"schizoid personality" was predicted with terms like "personality disorders" and "personality theory," which are narrower than the true hypernym "psychosis."
|
| 412 |
+
|
| 413 |
+
3. Inclusion of Unrelated Terms: In some cases, the model includes terms in its predictions that are not directly related to the input word or true hypernym.
|
| 414 |
+
|
| 415 |
+
For example:
|
| 416 |
+
|
| 417 |
+
"pirisudanol" was predicted with terms like "dopamine receptor d1" and "dopamine receptor d2", which are unrelated to the true hypernym "psychiatry".
|
| 418 |
+
|
| 419 |
+
4. Repetition of Terms: The model occasionally repeats terms in its predictions, which might indicate redundancy or lack of diversity in its output.
|
| 420 |
+
|
| 421 |
+
For example:
|
| 422 |
+
|
| 423 |
+
"pirisudanol" was predicted with repeated terms like "piracetam" and "hydergine."
|
| 424 |
+
|
| 425 |
+
5. Inadequate Coverage of Concepts: Some input words and true hypernyms receive predictions that lack comprehensive coverage of related concepts.
|
| 426 |
+
|
| 427 |
+
For example:
|
| 428 |
+
|
| 429 |
+
"result category" was predicted with terms like "item response theory" and "qualitative research," which might not fully encapsulate the scope of the true hypernym "mathematics education."
|
| 430 |
+
|
| 431 |
+
(8)
|
| 432 |
+
|
| 433 |
+
You are a helpful linguist. Your task is to do error classification for a dataset for hypernym prediction in a general domain. Here are possible classes:
|
| 434 |
+
|
| 435 |
+
1. Overly Broad Predictions: The model often generates predictions encompassing a broader concept than the true hypernym.
|
| 436 |
+
2. Underly Broad Predictions: Conversely, some predictions are too narrow and fail to capture the broader concept represented by the true hypernym.
|
| 437 |
+
3. Inaccurate Predictions: The model may predict words that are very semantically close to the true hypernym, but struggles with fitting into the exact wording
|
| 438 |
+
4. Conceptual Ambiguity: The model may struggle with ambiguous (polysemantic/multivalued) input words or concepts, leading to incorrect predictions.
|
| 439 |
+
5. Incorrect definitions: The model gets confused with the incorrect/inaccurate definition retrieved from external sources
|
| 440 |
+
|
| 441 |
+
You will be given an input word/phrase, true hypernym, and candidate hypernyms. Please, return a Python dict of error classes {1: 1, 2: 5, 3: 1, ..., 100:3}) for all instances below:
|
| 442 |
+
|
| 443 |
+
id: 1, input word: parathyroid_hormone, true hypernym: hormone, predicted: hormonal agent, hormon, hematopoietic growth factor, growth factor of the blood, growth regulator, growth substance, growth ...
|
| 444 |
+
|
| 445 |
+
id: 100, input word: proofreader, true hypernym: printer, predicted: reader, audience, audience member, spectator, viewer, listener, listener-in, hearer, recipient, witness, observer
|
| 446 |
+
|
| 447 |
+
Example 9 shows the prompt used to have ChatGPT automatically evaluate TaxoLLaMA results, since manual analysis showed that the gold answers in the MAG PSY and MAG CS datasets might not be of good quality either. ChatGPT was therefore asked to choose between the true answer from the dataset and the predicted candidate.
|
| 448 |
+
|
| 449 |
+
(9)
|
| 450 |
+
|
| 451 |
+
Here are the words in the psychological domain. Your task is to choose hypernym which is more relevant given two options. Answer 1 / 2 / both / none
|
| 452 |
+
|
| 453 |
+
Example:
|
| 454 |
+
|
| 455 |
+
social construction of schizophrenia
|
| 456 |
+
|
| 457 |
+
option 1: psychosis
|
| 458 |
+
|
| 459 |
+
option 2: psychiatry
|
| 460 |
+
|
| 461 |
+
Answer: 2
|
| 462 |
+
|
| 463 |
+
abdominal air sac
|
| 464 |
+
|
| 465 |
+
option 1: air sac
|
| 466 |
+
|
| 467 |
+
option 2: trachea
|
| 468 |
+
|
| 469 |
+
Answer:
|
| 470 |
+
|
| 471 |
+
<table><tr><td>Error Type</td><td>Description</td></tr><tr><td>Overly Broad Predictions</td><td>The model often generates predictions that encompass a broader concept than the true hypernym.</td></tr><tr><td>Underly Broad Predictions</td><td>Some predictions are too narrow and fail to capture the broader concept represented by the true hypernym</td></tr><tr><td>Inclusion of Unrelated Terms</td><td>In some cases, the model includes terms in its predictions that are not directly related to the input word or true hypernym.</td></tr><tr><td>Repetition of Terms</td><td>The model occasionally repeats terms in its predictions, which might indicate redundancy or lack of diversity in its output.</td></tr><tr><td>Inadequate Coverage of Concepts</td><td>Some input words and true hypernyms receive predictions that lack comprehensive coverage of related concepts</td></tr><tr><td>Semantic Shift</td><td>The model might exhibit errors related to semantic shift, where the predicted terms are semantically related to the input word but do not accurately reflect the intended meaning or context.</td></tr><tr><td>Conceptual Ambiguity</td><td>The model may struggle with ambiguous input words or concepts, leading to predictions that lack clarity or specificity.</td></tr><tr><td>Domain-Specific Knowledge</td><td>Errors may arise due to a lack of domain-specific knowledge or understanding of specialized terminology.</td></tr><tr><td>Cultural or Contextual Bias</td><td>The model's predictions may be influenced by cultural or contextual biases inherent in the training data. This could lead to inaccuracies, especially when dealing with topics or concepts that vary across cultures or contexts.</td></tr><tr><td>Incomplete Understanding of Relationships</td><td>The model may struggle to understand complex relationships between concepts, leading to inaccurate predictions.</td></tr><tr><td>Word Sense Disambiguation</td><td>Errors may occur due to difficulties in disambiguating between different senses of a word.</td></tr><tr><td>Knowledge Gap</td><td>The model's predictions may reflect gaps in its knowledge or understanding of certain concepts, resulting in inaccurate or incomplete responses.</td></tr></table>
|
| 472 |
+
|
| 473 |
+
Table 8: 12 Error types made by TaxoLLaMA for hypernym prediction detected by ChatGPT.
|
| 474 |
+
|
| 475 |
+
# B Error Type Analysis
|
| 476 |
+
|
| 477 |
+
This section presents the distribution of error types across the different hypernym prediction datasets (Hypernym Discovery and Taxonomy Enrichment) in Table 9. Moreover, Table 10 provides an example for each error type classified by ChatGPT.
|
| 480 |
+
|
| 481 |
+
<table><tr><td></td><td>1A: English</td><td>2A: Medical</td><td>2B: Music</td><td>1B: Italian</td><td>1C: Spanish</td><td>MAG-CS</td><td>MAG-PSY</td><td>Noun</td><td>Verb</td></tr><tr><td>Error 1</td><td>72.49%</td><td>93.75%</td><td>100.0%</td><td>54.69%</td><td>49.08%</td><td>66.48%</td><td>85.43%</td><td>81.45%</td><td>73.39%</td></tr><tr><td>Error 2</td><td>2.61%</td><td>0.00%</td><td>0.0%</td><td>10.03%</td><td>10.62%</td><td>5.40%</td><td>1.40%</td><td>4.10%</td><td>2.58%</td></tr><tr><td>Error 3</td><td>9.44%</td><td>1.25%</td><td>0.0%</td><td>27.83%</td><td>12.09%</td><td>13.21%</td><td>11.20%</td><td>6.05%</td><td>21.03%</td></tr><tr><td>Error 4</td><td>15.46%</td><td>5.00%</td><td>0.0%</td><td>7.44%</td><td>18.21%</td><td>10.94%</td><td>0.70%</td><td>6.64%</td><td>0.43%</td></tr><tr><td>Error 5</td><td>0.00%</td><td>0.00%</td><td>0.00%</td><td>0.00%</td><td>0.00%</td><td>3.98%</td><td>1.26%</td><td>1.76%</td><td>2.58%</td></tr></table>
|
| 482 |
+
|
| 483 |
+
Table 9: Errors Type Distribution across subset datasets for hypernym prediction: Hypernym Discovery and Taxonomy Enrichment.
|
| 484 |
+
|
| 485 |
+
<table><tr><td>Error Type</td><td>Input</td><td>True</td><td>Predicted</td></tr><tr><td>Overly Broad Predictions</td><td>weed</td><td>vascular plant</td><td>plant, plant life, vegetable, vegetation, vascular plant, phanerogam, green plant, green thing, plant of the class, plant of the division</td></tr><tr><td>Underly Broad Predictions</td><td>audio</td><td>auditory communication</td><td>signal, transmission, transmission signal, communication, communication signal, message, message signal, information, information signal, data, data signal, notification, notification signal, signaling</td></tr><tr><td>Inaccurate Predictions</td><td>unconventionality</td><td>unorthodoxy</td><td>unorthodoxiness, unorthodoxy, unconformity, heterodoxy, dissidence, disobedience, disloyalty, dis, disaffection, dissent, disagreement</td></tr><tr><td>Conceptual Ambiguity</td><td>giant red paintbrush</td><td>indian paintbrush</td><td>paintbrushrush, paint brush, larkspur, delphinium, liaceous plant, liliid monocot, liloid</td></tr><tr><td>Incorrect definitions</td><td>advance directive status</td><td>documentation</td><td>advance directive, directive, decision support system, clinical decision support system, health information technology</td></tr><tr><td colspan="4">Definition: A do-not-resuscitate order (DNR), also known as Do Not Attempt Resuscitation (DNAR), Do Not Attempt Cardiopulmonary Resuscitation (DNACPR)</td></tr></table>
|
| 486 |
+
|
| 487 |
+
Table 10: Examples for each Error type made by TaxoLLaMA for hypernym prediction detected by ChatGPT.
|
| 488 |
+
|
| 489 |
+
# C Distribution Visualization for Taxonomy Enrichment
|
| 490 |
+
|
| 491 |
+
In this section, we delve into the distribution patterns of ground truth and model predictions within the embedding space of the SentenceBert model (Reimers and Gurevych, 2019). To achieve this, we initiated two separate model runs, each with a distinct seed, aiming to capture the model's variability. Subsequently, we extracted the predicted candidates and the ground truth hypernyms, mapping them into the embedding space provided by SentenceBert. To facilitate a clearer visual analysis, we condensed the embedding dimensions to 50 using Principal Component Analysis and then applied t-SNE to project these dimensions onto two principal components for visualization.

Figure 6: t-SNE plot of distributions of ground truth nodes and predicted nodes for taxonomy enrichment tasks. Each point represents a node, embedded with SentenceBert. Color represents ground truth or model predictions (we ran 2 predictions with different seeds).
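A minimal sketch of this projection pipeline, assuming the sentence-transformers and scikit-learn libraries; the checkpoint name and the toy term list are placeholders rather than the authors' setup.

```python
# Rough sketch of the visualization pipeline described above (not the authors' code):
# embed terms with a SentenceBert-style model, reduce to 50 dims with PCA, project with t-SNE.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

terms = ["psychiatry", "psychosis", "social psychology"]  # gold + predicted hypernyms (toy list)
model = SentenceTransformer("all-MiniLM-L6-v2")           # placeholder checkpoint

embeddings = model.encode(terms)                                   # (n_terms, hidden_dim)
reduced = PCA(n_components=min(50, len(terms))).fit_transform(embeddings)
coords_2d = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(reduced)
# coords_2d can now be scattered, coloured by "ground truth" vs. "prediction" (cf. Figure 6).
```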
|
| 505 |
+
|
| 506 |
+
The findings, illustrated in Figure 6, reveal a distinct pattern between WordNet and the MAG subsets (MAG_CS and MAG_PSY). WordNet displays a notable overlap between the gold standard and predictions, despite a few outliers that are presumably lower-ranked candidates. Conversely, the MAG subsets exhibit different behavior, forming two slightly overlapping clusters in the embedding space, suggesting a divergence between predictions and ground truths. Additionally, these subsets contain more outliers, indicating instances where the model may have completely missed the accurate hypernym sense. It's important to consider, however, that the SentenceBert model's representations could contribute to these discrepancies, especially for concepts that are not well-represented in its training data.
|
| 507 |
+
|
| 508 |
+
# D Hyperlex Correlation Analysis
|
| 509 |
+
|
| 510 |
+

|
| 511 |
+
Figure 7: Correlation plot of the perplexity ranks against the annotators' scores on the HyperLex test sets. The line over the dots is a trend found with linear regression. * indicates that the correlation has a p-value lower than $10^{-4}$.
|
| 512 |
+
|
| 513 |
+
We also examine correlations using traditional methods for both test sets (refer to Figure 7). By overlaying the linear regression trend on the observed data points, a distinct trend emerges. However, this trend is notably impacted by outliers, particularly within the Random set. This observation aligns with findings from taxonomy construction, highlighting the model's challenges in accurately handling middle nodes or pairs exhibiting moderate entailment strength.
|
| 516 |
+
|
| 517 |
+
When analyzing gold scores ranging from 2 to 8, the Random set displays no discernible trend, underscoring the model's inconsistency in this range; the Lexical set shows a slightly better trend there. In both sets, however, pairs characterized by strong or minimal entailment are categorized more accurately. This distinction substantially strengthens the overall correlation, leading to a promising correlation score.
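As an illustration of this correlation analysis, a small sketch with SciPy and NumPy; the score arrays are hypothetical toy data, not the paper's results.

```python
# Hypothetical sketch of the HyperLex correlation analysis (toy data):
# compare model-derived perplexity ranks against human graded entailment scores.
import numpy as np
from scipy import stats

human_scores = np.array([0.5, 2.0, 4.5, 6.0, 8.5, 9.5])   # annotator scores in [0, 10]
perplexity_ranks = np.array([6, 5, 3, 4, 2, 1])            # lower rank = stronger entailment

rho, p_value = stats.spearmanr(perplexity_ranks, human_scores)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.2e}")

# Linear trend overlaid on the scatter, as in Figure 7.
slope, intercept = np.polyfit(human_scores, perplexity_ranks, deg=1)
```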
|
| 518 |
+
|
| 519 |
+
# E Hypernym motivation
|
| 520 |
+
|
| 521 |
+
Inspired by recent advancements in semantic analysis, particularly the work of Nikishina et al. (2023) on hyponym prediction, our study shifts focus towards hypernym prediction for several compelling reasons. First, predicting hypernyms is crucial for tasks such as taxonomy enrichment and hypernym discovery. Second, the formulation of a loss component for hypernym prediction is more straightforward, as most entities typically have a single correct hypernym, unlike hyponyms, where multiple valid options exist; handling the latter would necessitate either adjustments to the loss function or extensive dataset collection and analysis.
|
| 522 |
+
|
| 523 |
+
Furthermore, our experimentation with various prompts revealed that the most effective format is detailed in the main section. Alternate prompts, which adopted a more narrative style (e.g., "Given a hyponym 'tiger', the hypernym for it is"), led to the model generating paragraphs instead of concise hypernym lists. Adjustments to the system prompt failed to rectify this. Notably, appending a comma to the end of the target sequence remarkably improved the model output, encouraging it to list hypernyms instead of producing narrative text.
|
| 524 |
+
|
| 525 |
+
In addressing the disambiguation challenge, we experimented with incorporating definitions or technical identifiers from WordNet into the prompts. Definitions proved more effective, likely owing to the model's pre-training on textual data. Attempts to generate hypernyms with specific WordNet codes resulted in the model appending the same numerical identifier to each hypernym, which also resulted in lower scores.
|
| 526 |
+
|
| 527 |
+
# F Hyperparameter motivation
|
| 528 |
+
|
| 529 |
+
Our analysis revealed the model's acute sensitivity to the learning rate and scheduler settings. The feasibility of employing a high learning rate in the primary study was contingent upon the use of the LoRA adapter, which modulates weights without significant alterations. However, during full model fine-tuning, we faced instabilities manifesting as either overfitting or underfitting, which highlights the need for further exploration of optimal hyperparameter configurations. Additionally, the implementation of 4-bit quantization requires careful learning rate selection, as this process notably compresses the weight distribution, demanding strategies to effectively recover the model's knowledge thereafter.
|
| 530 |
+
|
| 531 |
+
In the fine-tuning process, we deliberately chose a smaller batch size to better accommodate the model to datasets, which are often limited in sample size. Contrary to our expectations, increasing the learning rate and batch size did not yield improved performance; this outcome can primarily be attributed to the reduced number of steps the model takes toward adapting to the specific domain. This strategy, however, did not apply to WordNet pre-training, where we observed differing trends.
|
| 532 |
+
|
| 533 |
+
Unlike certain instruction-tuning methodologies, our approach does not compute the loss on the instruction itself; loss calculation is confined solely to the target tokens.
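A minimal sketch of how such masking is typically done with Hugging Face causal LMs: instruction positions receive label -100, which the cross-entropy loss ignores. The model name and the prompt/target strings below are placeholders, not the exact format used in the paper.

```python
# Sketch of computing loss only on target tokens (labels = -100 are ignored by
# the cross-entropy loss in Hugging Face causal LMs). Model name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

instruction = "hyponym: tiger | hypernyms:"   # hypothetical instruction text
target = " big cat,"                          # target ends with a comma, as discussed in Appendix E

inst_ids = tokenizer(instruction, add_special_tokens=False).input_ids
tgt_ids = tokenizer(target, add_special_tokens=False).input_ids

input_ids = torch.tensor([inst_ids + tgt_ids])
labels = torch.tensor([[-100] * len(inst_ids) + tgt_ids])  # mask out the instruction

loss = model(input_ids=input_ids, labels=labels).loss      # computed over target tokens only
```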
|
| 534 |
+
|
| 535 |
+
The experiments utilized Nvidia A100 or Quadro RTX 8000 GPUs. Pre-training for TaxoLLaMA and TaxoLLaMA-bench spanned 6 GPU hours, while TaxoLLaMA-verb required less than 1 hour. Fine-tuning for MAG subsets took 5 GPU hours, attributed to the lengthy definitions. Fine-tuning for other datasets was completed in under an hour.
|
2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4587221fa1a6174956ce165a18ba3c37884feaf61a4d3a293ccd5b0d28b12a08
|
| 3 |
+
size 992241
|
2024/TaxoLLaMA_ WordNet-based Model for Solving Multiple Lexical Semantic Tasks/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/ef465396-e2c2-470e-b46d-fc568784c577_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/ef465396-e2c2-470e-b46d-fc568784c577_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/ef465396-e2c2-470e-b46d-fc568784c577_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:239bda06fdfafba2ad9635458fa3ee7435197d5c4566ca8c29cd0dc5772d9f6a
|
| 3 |
+
size 1466071
|
2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cfbce0bf95aa1a7192dca6ee9da9a7ca87491f5c597e47461018b84fbf943311
|
| 3 |
+
size 915480
|
2024/Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/17c242f9-b044-448b-b875-ed886c7a216b_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/17c242f9-b044-448b-b875-ed886c7a216b_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/17c242f9-b044-448b-b875-ed886c7a216b_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c7f365f5d5ca29bfee92925c9317ae4edb22bc6cc384eabee6aee5a98c132a43
|
| 3 |
+
size 1392165
|
2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/full.md
ADDED
|
@@ -0,0 +1,569 @@
|
| 1 |
+
# Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”?
|
| 2 |
+
|
| 3 |
+
Tong Liu $^{1,2}$ , Iza Škrjanec $^{3}$ , Vera Demberg $^{3,4}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ LMU Munich $^{2}$ Munich Center for Machine Learning $^{3}$ Saarland University $^{4}$ Max Planck Institute for Informatics, Saarland Informatics Campus. tongliuphysics@gmail.com, {skrjanec,vera}@coli.uni-saarland.de
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
A wide body of evidence shows that human language processing difficulty is predicted by the information-theoretic measure surprisal, a word's negative log probability in context. However, it is still unclear how to best estimate these probabilities needed for predicting human processing difficulty – while a long-standing belief held that models with lower perplexity would provide more accurate estimates of word predictability, and therefore lead to better reading time predictions, recent work has shown that for very large models, psycholinguistic predictive power decreases. One reason could be that language models might be more confident of their predictions than humans, because they have had exposure to several magnitudes more data. In this paper, we test what effect temperature-scaling of large language model (LLM) predictions has on surprisal estimates and their predictive power of reading times of English texts. Firstly, we show that calibration of large language models typically improves with model size, i.e. poorer calibration cannot account for poorer fit to reading times. Secondly, we find that temperature-scaling probabilities lead to a systematically better fit to reading times (up to $89\%$ improvement in delta log likelihood), across several reading time corpora. Finally, we show that this improvement in fit is chiefly driven by words that are composed of multiple subword tokens. $^{1}$
|
| 10 |
+
|
| 11 |
+
# 1 Introduction
|
| 12 |
+
|
| 13 |
+
In psycholinguistics, a key finding is that words with higher surprisal (= negative log probability of the word in context) require more time for processing (Hale, 2001; Levy, 2008). Numerous studies provided experimental evidence supporting this theory, demonstrating that surprisal is a powerful predictive measure of processing complexity (e.g., Demberg and Keller, 2008; Wilcox et al., 2020, 2023; Shain et al., 2022), and that the relationship between surprisal and reading times (RTs) seems to be linear (Smith and Levy, 2013; Wilcox et al., 2020; Shain et al., 2022).
|
| 16 |
+
|
| 17 |
+
However, prior work implicitly made the assumption that human predictability estimates would be similar to the actual probability of a word occurring in a given context, and that therefore, surprisal values estimated from models that achieve lower perplexities should also approximate human processing difficulty better (Goodkind and Bicknell, 2018; Merkx and Frank, 2021).
|
| 18 |
+
|
| 19 |
+
Recent research has however found that this is not true – surprisal values from very large LLMs provide in fact a very poor fit to reading times. Oh and Schuler (2023b) hypothesize that this might be due to LLMs being “too confident” in their estimates of rare named entities compared to humans, thanks to their manifold larger exposure to data and greater memory capacity compared to humans. Furthermore, work on NLP applications like question answering has reported that probability estimates from pretrained language models are often overconfident, i.e. they are higher than the ground truth probability (Si et al., 2022; Kumar, 2022). These findings hence beg the question whether current LLMs are well-calibrated with respect to “objective” word occurrence probabilities. Relatedly, we ask whether LLM probability estimates are overconfident compared to human estimates (as observed in reading times).
|
| 20 |
+
|
| 21 |
+
One approach to address calibration problems is to use temperature scaling, as done e.g., in vision tasks (Guo et al., 2017; Hendrycks et al., 2019). Temperature-scaling with a temperature $T > 1$ has the effect that the probability distribution is flattened such that it becomes more similar to a uniform distribution. Temperature-scaling hence incorporates uncertainty into the probability estimates from LLMs.
|
| 22 |
+
|
| 23 |
+
We note that the idea to work with flattened distributions instead of the original probability distributions from LLMs is also related to contextual Rényi entropy as discussed by Pimentel et al. (2023), as well as the super/sub-linear surprisal effect by Shain et al. (2022); Hoover et al. (2023). However, rather than merely adjusting the power of surprisal in super/sub-logarithmic patterns or the power of probability in Rényi entropy, our work represents a distinct branch of study (i.e., probability calibration) in machine learning: shaping the probability distribution itself by shaping the logits before the softmax. We also discuss the motivation for why a slightly flattened distribution may be more suitable, and whether this change in distribution is applied when calculating surprisal vs. when calculating entropy.
|
| 26 |
+
|
| 27 |
+
Our experimental results show that scaling probabilities can largely improve the fit to reading times in all 12 settings (3 corpora $\times$ 4 neural LMs). Our contributions are summarized as follows: (1) We propose temperature-scaled surprisal, where surprisal is calculated from temperature-scaled probabilities. (2) We demonstrate that temperature-scaling with temperature $T\approx 2.5$ improves predictability of human reading times of English texts compared to $T = 1$ . (3) We identify linguistic phenomena that correlate with the benefit of temperature-scaled surprisal by analyzing residual errors from regression models.
|
| 28 |
+
|
| 29 |
+
# 2 Predictive Power for Reading Times
|
| 30 |
+
|
| 31 |
+
In psycholinguistics, RTs on a word are believed to correlate with its processing difficulty. RTs can be gathered using different paradigms, including eyetracking while reading text on a screen (Rayner, 1998), self-paced reading (Aaronson and Scarborough, 1976; Mitchell and Green, 1978) and the Maze task (Forster et al., 2009).
|
| 32 |
+
|
| 33 |
+
The most common procedure for predicting a word's RT is first to select a set of predictor variables thought to impact RTs, $\mathbf{v} = [v^{(1)},\dots,v^{(d)}]^{\top}\in \mathbb{R}^d$, which include, e.g., the length of a word $w_t$, $|w_t|$, and the frequency of a word, $\mathrm{freq}(w_t)$. Let $f_{\phi}:\mathbb{R}^{d}\to \mathbb{R}$ be a regression model parametrized by $\phi$ used to fit these predictors for the prediction of human RTs: $rt(w_{t}|\pmb{w}_{< t})\sim f_{\phi}(\mathbf{v})$, given the previous context $\pmb{w}_{< t}$. The performance of $f_{\phi}$ is quantified by its log-likelihood, with a higher log-likelihood indicating better psychometric predictive power for human RTs (Frank and Bod, 2011; Fossum and Levy, 2012).
|
| 34 |
+
|
| 35 |
+
Besides the word length $|w_t|$ and word frequency $\mathrm{freq}(w_t)$ , a word's surprisal (i.e., its negative log-probability in context) (Hale, 2001; Levy, 2008) has been shown to be predictive of RTs (Demberg and Keller, 2008; Goodkind and Bicknell, 2018; Wilcox et al., 2020; Shain et al., 2022).
|
| 36 |
+
|
| 37 |
+
# 3 Methods
|
| 38 |
+
|
| 39 |
+
In this section, we delve into key aspects of information-theoretic measures in language comprehension. We start with surprisal, a method connecting processing difficulty to word predictability. As word predictability is empirically estimated by LLMs, we introduce the notion of calibration errors, metrics quantifying how good the estimation of word predictability is. Further, we lay out temperature-scaled surprisal, and the relation between varying temperature vs. varying $\alpha$ in contextual Rényi entropy.
|
| 40 |
+
|
| 41 |
+
# 3.1 Surprisal
|
| 42 |
+
|
| 43 |
+
Starting from Shannon (1948), the information conveyed by a word $w_{t}$ has been quantified as the negative log probability of the word $w_{t}$ given its previous context $\pmb{w}_{< t}$ . In Surprisal Theory (Hale, 2001; Levy, 2008), this quantity is called surprisal $s(w_{t})$ and proposed to be predictive of the word's processing difficulty, typically quantified as its RT. Surprisal values are typically estimated from language models $\hat{p}(w_{t} | \pmb{w}_{< t})$ .
|
| 44 |
+
|
| 45 |
+
$$
s(w_t) = -\log_2 p(w_t \mid \boldsymbol{w}_{<t}), \tag{1}
$$
|
| 48 |
+
|
| 49 |
+
# 3.2 Calibration error
|
| 50 |
+
|
| 51 |
+
**Definitions** Let $\mathcal{D} = \{(x_i, y_i)\}_i^N$ be a data set where $x_i \in \mathcal{X}$ is a sample (i.e., context) and $y_i \in \mathcal{K} = [K]$ is a category label. Let $g_\theta$ and $\hat{\mathbf{z}}_i = g_\theta(x_i)$ denote a language model parametrized by $\theta$ and the output logit vector of sample $i$, respectively. The predicted class label $\hat{y}_i$ for sample $i$ is given by $\hat{y}_i = \arg \max_{k \in \mathcal{K}} g(x_i)_k$ and the confidence for sample $i$ is given by $\hat{p}_i = \max_{k \in \mathcal{K}} g(x_i)_k$. A model is perfectly calibrated when the confidence $\hat{p}$ is equal to the frequency of correctness, i.e., $\mathbb{P}(\hat{y}_i = y_i | \hat{p}_i = p) = p$ holds for all $p \in [0,1]$ and any sample $i$. Any difference between the left and right sides of the above equation indicates a calibration error.
|
| 52 |
+
|
| 53 |
+
Expected calibration error (ECE) (Guo et al., 2017): ECE is the most popular calibration metric; it empirically approximates the calibration error by discretizing the probability interval into a fixed number of bins ($B_{m}$ with $m\in \{1,2,\dots,M\}$) and measuring the gap between average confidence and average accuracy in each bin $B_{m}$.
|
| 56 |
+
|
| 57 |
+
$$
\mathrm{ECE} = \frac{1}{N} \sum_{m=1}^{M} \Big| \sum_{i \in B_m} \hat{p}_i - \sum_{i \in B_m} \mathbb{1}[\hat{y}_i = y_i] \Big|, \tag{2}
$$
|
| 60 |
+
|
| 61 |
+
where $\mathbb{1}$ is the indicator function. However, it does not necessarily measure the actual-word probability, which is the probability required for calculating surprisal in Eq. 1. It focuses only on the top-label probability for a given sample.
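For concreteness, a small NumPy sketch of Eq. 2 with equally spaced confidence bins; the confidence and correctness arrays are toy inputs, not results from the paper.

```python
# Toy NumPy sketch of ECE (Eq. 2): bin samples by confidence and sum the absolute
# gaps between total confidence and total correctness per bin, normalized by N.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += abs(confidences[in_bin].sum() - correct[in_bin].sum())
    return ece / len(confidences)

conf = np.array([0.9, 0.8, 0.6, 0.95, 0.55])   # max softmax probabilities
hit = np.array([1, 1, 0, 1, 1], dtype=float)   # whether argmax == actual token
print(expected_calibration_error(conf, hit))
```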
|
| 62 |
+
|
| 63 |
+
Classwise-ECE (CECE) (Kumar et al., 2019; Kull et al., 2019): In comparison, CECE measures the probabilities of all classes. For each bin and every class $k$, it assesses the difference between the average confidence of samples for class $k$ and the actual proportion of class $k$. Assuming all classes are weighted equally, we have:
|
| 64 |
+
|
| 65 |
+
$$
\mathrm{CECE} = \frac{1}{NK} \sum_{k=1}^{K} \sum_{m=1}^{M} \left| \sum_{i \in B_m} \hat{p}_{i,k} - \sum_{i \in B_m} \mathbb{1}[k = y_i] \right|, \tag{3}
$$
|
| 70 |
+
|
| 71 |
+
where $\hat{p}_{i,k}$ is the predicted probability of sample $i$ for class $k$ .
|
| 72 |
+
|
| 73 |
+
Human-likeness calibration error (HCE): We define the HCE as the Kullback-Leibler (KL) divergence between the predicted probability $\hat{p}$ from a neural LM and the actual probability $p^*$ of the human language model.
|
| 74 |
+
|
| 75 |
+
$$
\mathrm{HCE} = D_{KL}\left(\hat{\boldsymbol{p}} \,\|\, \boldsymbol{p}^{*}\right). \tag{4}
$$
|
| 78 |
+
|
| 79 |
+
Empirically, since $p^*$ is not directly observable, we approximate it by the estimates of a temperature-scaled model that best fits human reading times (as discussed later). We denote the approximated HCE using such a method as $\mathrm{HCE}_{\mathrm{TS}}$ .
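A toy sketch of Eq. 4 and of this approximation; the logits and the temperature value are illustrative assumptions, not values from the paper.

```python
# Toy sketch of HCE (Eq. 4): KL divergence between an LM's next-word distribution
# and a temperature-scaled distribution used here as a proxy for the "human" p*.
import numpy as np

def kl_divergence(p_hat, p_star, eps=1e-12):
    p_hat = np.clip(p_hat, eps, 1.0)
    p_star = np.clip(p_star, eps, 1.0)
    return float(np.sum(p_hat * np.log(p_hat / p_star)))

logits = np.array([3.0, 1.0, 0.5, -1.0])                      # hypothetical next-word logits
p_hat = np.exp(logits) / np.exp(logits).sum()                 # LM distribution (T = 1)
p_ts = np.exp(logits / 2.5) / np.exp(logits / 2.5).sum()      # temperature-scaled proxy for p*
print(kl_divergence(p_hat, p_ts))
```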
|
| 80 |
+
|
| 81 |
+
# 3.3 Temperature-scaled surprisal
|
| 82 |
+
|
| 83 |
+
Temperature scaling (Guo et al., 2017) is a widely-used method to improve model calibration. Given the output logit vector $\hat{\mathbf{z}}_i$ for sample $i$ , a single scalar $T > 0$ is applied to rescale $\hat{\mathbf{z}}_i$ before the softmax activation:
|
| 84 |
+
|
| 85 |
+
$$
\hat{q}_i = \max_k \, \sigma_{SM}\left(\frac{\hat{\mathbf{z}}_i}{T}\right)^{(k)}, \tag{5}
$$
|
| 88 |
+
|
| 89 |
+
where $\hat{q}_i$ is the calibrated confidence for sample $i$, and $\sigma_{SM}$ is the softmax function. Scaling by a scalar $T$ does not alter the ranking; hence, the predicted label $\hat{y}_i$ remains unchanged. With $T > 1$, scaling "softens" the probability distribution (i.e., makes it more uniform), increasing the uncertainty and entropy of the distribution, while $T < 1$ sharpens the distribution. In calibration research, the parameter $T$ is optimized by minimizing the negative log-likelihood on the validation set. In our experiments on fit to human RTs, we manually tune this temperature with $T > 1$.
|
| 90 |
+
|
| 91 |
+
Temperature scaling has been successfully applied in several applications: In knowledge distillation (Hinton et al., 2015), temperature scaling (with $T > 1$ ) is used to "soften" the knowledge (i.e., probability distribution) provided by the teacher model; in text generation, temperature is used to shape the probability distribution to ease certain aspects of the problems of top-k sampling (e.g., choosing an appropriate $k$ value across varying contexts) (Ficler and Goldberg, 2017; Fan et al., 2018). Temperature tuning inherently shifts the model's output in the generation's quality/diversity spectrum (Caccia et al., 2018), with higher temperature decreasing the quality of generation while improving its diversity. This also aligns with our consideration of a possibility that human probability distributions might be flatter than the ones learned by language models and thus increasing the predictive diversity of surprisal provided by LLMs could potentially yield more human-like distributions.
|
| 92 |
+
|
| 93 |
+
Given Eq. 5, temperature-scaled surprisal is:
|
| 94 |
+
|
| 95 |
+
$$
s_T(w_t, T) = -\log_2\left(\sigma_{SM}\left(\hat{\mathbf{z}}_{w_t} / T\right)^{(k^*)}\right), \tag{6}
$$
|
| 98 |
+
|
| 99 |
+
where $\hat{\mathbf{z}}_{w_t}$ and $k^* = y_{w_t}$ denote the logit vector and the actual class of word $w_t$, respectively. For a given $t\in (0,\infty)$, we simply denote $s_T(w_t,T = t)$ as $s_T|_{T = t}$. The temperature $T$ with the best performance in the final fit to RTs is denoted as $T^{*}$.
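A minimal sketch of Eq. 6 with GPT-2 from Hugging Face transformers, summing sub-word surprisals for multi-token words (as later described in Section 4.2); this is an illustration under assumed settings, not the authors' implementation.

```python
# Sketch of temperature-scaled surprisal (Eq. 6) with GPT-2; not the authors' code.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def temperature_scaled_surprisal(context: str, word: str, T: float = 2.5) -> float:
    ids = tokenizer(context, return_tensors="pt").input_ids
    total_bits = 0.0
    for wid in tokenizer(word).input_ids:                    # sum over sub-word tokens
        with torch.no_grad():
            logits = model(ids).logits[0, -1]                # next-token logits
        log_probs = torch.log_softmax(logits / T, dim=-1)    # temperature-scaled distribution
        total_bits += -log_probs[wid].item() / math.log(2)   # nats -> bits
        ids = torch.cat([ids, torch.tensor([[wid]])], dim=1)
    return total_bits

print(temperature_scaled_surprisal("The cat sat on the", " mat", T=2.5))
```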
|
| 100 |
+
|
| 101 |
+
Figure 1: Temperature-scaled surprisal $s_{T}(w_{t}, T)$ with corresponding $T \in [1,2.5]$ for two random five-class probability distributions: $p_{i} = [0.8,0.05,0.05,0.05,0.05]$ and $p_{j} = [0.8,0.2,0,0,0]$. Dashed lines show Shannon entropy ($\mathrm{H}_{1}$). Loosely dashed lines show Rényi entropy with $\alpha = 1/2$ ($\mathrm{H}_{1/2}$).

The extent to which a word's surprisal is affected by temperature scaling depends on the distribution and thus correlates with the entropy at word $w_{t}$. Consider an example of two five-class probability distributions $\pmb{p}_{i} = [0.8, 0.05, 0.05, 0.05, 0.05]$ and $\pmb{p}_{j} = [0.8, 0.2, 0, 0, 0]$, for which the word indicated by the first position in the probability vector has identical surprisal in both $\pmb{p}_{i}$ and $\pmb{p}_{j}$. Notably, $\pmb{p}_{i}$ is more uniform and $\pmb{p}_{j}$ is more peaked, resulting in distinct entropy characteristics: $\mathrm{H}(w_i|\pmb{w}_{<i}) > \mathrm{H}(w_j|\pmb{w}_{<j})$, where the entropy is defined as the expectation of surprisal of the current word $w_t$ over the vocabulary, $\mathrm{H}(w_t|\pmb{w}_{<t}) = \mathbb{E}_{w'\sim p(\cdot|\pmb{w}_{<t})}[s(w')] = -\sum_{w' \in \overline{\mathcal{W}}} p(w'|\pmb{w}_{<t})\log_2 p(w'|\pmb{w}_{<t})$, where $\overline{\mathcal{W}} = \mathcal{W} \cup \{\mathrm{EOS}\}$ denotes the vocabulary $\mathcal{W}$ extended with the EOS token. Fig. 1 illustrates a greater increase in surprisal for a word with a more uniform distribution than with a more peaked distribution.
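The anecdote can be checked numerically; below is a toy NumPy sketch using the two distributions above, treating the log-probabilities as logits (the softmax is invariant to the additive constant, so this is equivalent).

```python
# Toy reproduction of the Figure 1 anecdote: the same actual-word probability (0.8)
# gains more surprisal under temperature scaling when the distribution is more uniform.
import numpy as np

def scaled_surprisal(p, idx, T):
    logits = np.log(np.clip(p, 1e-12, 1.0))          # treat log-probs as logits
    q = np.exp(logits / T) / np.exp(logits / T).sum()
    return -np.log2(q[idx])

p_i = np.array([0.8, 0.05, 0.05, 0.05, 0.05])  # more uniform tail, higher entropy
p_j = np.array([0.8, 0.2, 0.0, 0.0, 0.0])      # more peaked, lower entropy

for T in (1.0, 2.5):
    print(T, scaled_surprisal(p_i, 0, T), scaled_surprisal(p_j, 0, T))
# At T = 1 both words have ~0.32 bits; at T = 2.5 the word from p_i rises to ~1.2 bits
# while the word from p_j only rises to ~0.66 bits.
```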
|
| 107 |
+
|
| 108 |
+
This figure also anecdotally shows that the effect of applying temperature scaling with $T > 1$ is similar to the effect of setting $\alpha < 1$ in Rényi entropy. We will discuss the relationship between these parameters in more detail in Appendix A.
|
| 109 |
+
|
| 110 |
+
# 4 Experimental setup
|
| 111 |
+
|
| 112 |
+
# 4.1 Datasets
|
| 113 |
+
|
| 114 |
+
We conduct analyses on two self-paced reading corpora, the Natural Stories Corpus (Futrell et al., 2018) and the Brown Corpus (Smith and Levy, 2013), as well as on the Dundee Corpus (Kennedy et al., 2003), which contains the eye-movement record; our analyses in this paper focus on first-pass times $^{2}$ from the Dundee corpus. We follow previous work with respect to the preprocessing steps for each corpus (Kuribayashi et al., 2022; Shain et al., 2022). Appendix C includes details about the preprocessing steps of each corpus.
|
| 115 |
+
|
| 116 |
+
# 4.2 Language Models
|
| 117 |
+
|
| 118 |
+
Recent observations showed that surprisal provided by LLMs with more parameters and lower perplexity is less predictive of self-paced reading times and eye-gaze durations (Shain et al., 2022; Oh and Schuler, 2023b); across different experiments, GPT-2 (Radford et al., 2019) surprisals were found to predict human RTs best. Therefore, we take four variants of pretrained GPT-2 (small, medium, large, xl) as our language models in all experiments. Following prior work, we obtain the surprisal for words composed of more than one subword by summing up the surprisal estimates of the subwords.
|
| 119 |
+
|
| 120 |
+
# 4.3 Metrics and evaluation
|
| 121 |
+
|
| 122 |
+
We measure the predictive power of surprisal estimates from different language models as the per-data-point log-likelihood difference between a linear mixed-effects (LME) regression model (fit with the lme4 package; Bates et al., 2015) that includes surprisal estimates as a predictor (target model) and a model without surprisal (base model), following Goodkind and Bicknell (2018); Wilcox et al. (2020). More specifically, the metric of delta log-likelihood is defined as:
|
| 123 |
+
|
| 124 |
+
$$
\Delta_{\mathrm{llh}} = \mathrm{llh}(f_{\phi}(\mathbf{v}^{tgt})) - \mathrm{llh}(f_{\phi}(\mathbf{v}^{base})), \tag{7}
$$
|
| 127 |
+
|
| 128 |
+
where $\mathbf{v}^{tgt}$ denotes the target predictor variables, which include the baseline predictor variables as well as the predictor variables of interest, such as surprisal or temperature-scaled surprisal, and $\mathbf{v}^{base}$ denotes the base predictor variables, including only the baseline predictors. The greater the value of $\Delta_{\mathrm{llh}}$, the more valuable the additional surprisal estimates are for predicting human reading times.
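For concreteness, a rough sketch of this comparison. The paper fits LME models with lme4 in R; the statsmodels call, file path, and column names below are stand-in assumptions, and the random-effects structure is simplified to a by-subject intercept.

```python
# Rough Python analogue of the delta log-likelihood evaluation (Eq. 7); not the authors' setup.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per word token with columns
# rt, length, log_freq, surprisal, subject.
df = pd.read_csv("reading_times.csv")  # placeholder path

base = smf.mixedlm("rt ~ length + log_freq", df, groups=df["subject"]).fit()
target = smf.mixedlm("rt ~ length + log_freq + surprisal", df, groups=df["subject"]).fit()

delta_llh = (target.llf - base.llf) / len(df)  # per-data-point log-likelihood difference
print(delta_llh)
```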
|
| 129 |
+
|
| 130 |
+
For the calibration error evaluation, we set the number of bins $M$ to 15 for both ECE and CECE, aligning with prior literature, such as the works by Guo et al. (2017); Kumar et al. (2019); Rahimi et al. (2020b), to ensure consistency in addressing problems where comparable probability ranges are relevant. The calibration metrics (ECE and CECE) are evaluated on each reading time corpus $\mathcal{D}$ separately. For simplicity, our calibration evaluation is conducted at the token level. Given that many words have extremely low probabilities and thus are often grouped into a single bin, we also evaluate the calibration error under the log probability binning scheme. For further descriptions regarding the metrics and evaluation, see Appendix D.
|
| 131 |
+
|
| 132 |
+
# 5 Results
|
| 133 |
+
|
| 134 |
+
# 5.1 Calibration of LLMs
|
| 135 |
+
|
| 136 |
+
Table 1 shows ECE and CECE under the log binning scheme for GPT-2 models of different sizes. LLMs are in general well calibrated on language modeling. Moreover, calibration improves with scale: larger LMs are better calibrated. This conclusion is consistent with the calibration analyses on BIG-bench multiple-choice tasks in Srivastava et al. (2023), as well as on several tasks including language modelling in Zhu et al. (2023).
|
| 137 |
+
|
| 138 |
+
# 5.2 Main result: temperature-scaled surprisal improves human reading time prediction
|
| 139 |
+
|
| 140 |
+
We evaluate the predictive power of temperature-scaled surprisal. We scale $T$ in the range of [1, 10] and measure $\Delta_{\mathrm{llh}}$ , see Fig. 2. First, a confirmatory observation regarding the relationship between model size and predictive power: At $T = 1$ , GPT-2 small exhibits the best predictive performance, and as the model size increases, $\Delta_{\mathrm{llh}}$ declines, which is consistent with previous studies (Shain et al., 2022; Oh et al., 2022; Oh and Schuler, 2023b). Secondly, scaling the surprisal with $T > 1$ can significantly improve the predictive power across all corpora and LLMs. With optimal $T^*$ , on Dundee, Natural Stories, and Brown, the $\Delta_{\mathrm{llh}}$ improvement is 23-43%, 60-89%, and 14-24%, respectively. We assess statistical significance of GPT-2 small in Appendix H, where we report a result of $p < 0.001$ on three corpora. We also observe a consistent pattern: when increasing $T$ , $\Delta_{\mathrm{llh}}$ first rises then declines; the optimal value $T^*$ falls within the range of (2, 3) (around 2.5) across all models and corpora in our setting. At $T^*$ , even though the impact of model size on final performance is not fully recovered, the disparity diminishes. Smaller models continue to outperform, but the extent of model sizes influencing performance is reduced.
|
| 141 |
+
|
| 142 |
+
Finally, larger LMs typically have a larger human-likeness calibration error, as shown in Table 1. Larger LMs also require a higher value of $T$ to reach their best performance and show a greater improvement from temperature-scaled surprisal.
|
| 143 |
+
|
| 144 |
+
# 5.3 Calibration error vs. RT prediction error
|
| 145 |
+
|
| 146 |
+
Table 2 shows ECE and CECE in both the equally-spaced and log binning schemes for $T = 1$ and $T = T^{*}$ on the three corpora.
|
| 147 |
+
|
| 148 |
+
<table><tr><td colspan="2"></td><td>T*</td><td>Δ1lh+</td><td>HCETS↓</td><td>ECElog↓</td><td>CECElog↓</td></tr><tr><td rowspan="4">Dundee</td><td>s</td><td>2.75</td><td>22.5</td><td>3.11</td><td>1.59</td><td>4.07E-03</td></tr><tr><td>m</td><td>3.0</td><td>42.0</td><td>3.61</td><td>1.74</td><td>4.13E-03</td></tr><tr><td>l</td><td>3.0</td><td>39.9</td><td>3.82</td><td>1.55</td><td>3.99E-03</td></tr><tr><td>xl</td><td>3.25</td><td>43.2</td><td>4.13</td><td>1.29</td><td>3.84E-03</td></tr><tr><td rowspan="4">NS</td><td>s</td><td>2.5</td><td>60.3</td><td>3.31</td><td>1.91</td><td>1.53E-02</td></tr><tr><td>m</td><td>2.5</td><td>63.0</td><td>3.50</td><td>1.80</td><td>1.50E-02</td></tr><tr><td>l</td><td>2.5</td><td>82.6</td><td>3.97</td><td>1.70</td><td>1.40E-02</td></tr><tr><td>xl</td><td>2.5</td><td>89.0</td><td>4.07</td><td>1.56</td><td>1.35E-02</td></tr><tr><td rowspan="4">Brown</td><td>s</td><td>2.5</td><td>13.7</td><td>3.10</td><td>1.69</td><td>1.53E-02</td></tr><tr><td>m</td><td>2.5</td><td>16.2</td><td>3.29</td><td>2.27</td><td>1.51E-02</td></tr><tr><td>l</td><td>2.75</td><td>21.8</td><td>4.18</td><td>1.58</td><td>1.44E-02</td></tr><tr><td>xl</td><td>2.75</td><td>24.4</td><td>4.29</td><td>1.56</td><td>1.38E-02</td></tr></table>
|
| 149 |
+
|
| 150 |
+
Table 1: Optimal $T^{*}$, $\Delta_{\mathrm{llh}}$ improvement (%) ($\Delta_{\mathrm{llh}}+ = (\Delta_{\mathrm{llh}}(T = T^{*}) - \Delta_{\mathrm{llh}}(T = 1)) / \Delta_{\mathrm{llh}}(T = 1)$), and calibration errors ($\mathrm{HCE}_{\mathrm{TS}}$, % ECE and % CECE) for GPT-2 models on Dundee, Natural Stories (NS) and Brown. $\Delta_{\mathrm{llh}}$ values are multiplied by 1000. ECE and CECE are evaluated under the log binning scheme.
|
| 151 |
+
|
| 152 |
+
The probability distribution shaped by an optimal $T^{*}$ learnt for the fit to human RTs drastically hurts model calibration in terms of these two metrics: ECE and CECE at $T^{*}$ are more than 10 times worse than the values at $T = 1$. This discrepancy can be attributed to the different minima of the deviations in LM human RT prediction and in expected calibration error. The former is minimized towards words where LM surprisal significantly deviates from human processing difficulty, while the latter is typically minimized with respect to the negative log-likelihood on a held-out dataset (Guo et al., 2017; Rahimi et al., 2020a).
|
| 153 |
+
|
| 154 |
+
# 6 Linguistic analysis
Next we want to gain insight into what words benefit the most from temperature scaling. To this end, we analyze residuals from fitting LME regression models, identifying data points where scaling the temperature parameter notably enhances the fit of human RTs. Specifically, we quantify the improvement in fit by comparing the mean squared error (MSE) before and after adjusting the temperature
<table><tr><td></td><td></td><td>ECE↓</td><td>ECElog↓</td><td>CECE↓</td><td>CECElog↓</td></tr><tr><td rowspan="2">Dundee</td><td>T=1</td><td>1.43</td><td>1.59</td><td>4.05E-03</td><td>4.07E-03</td></tr><tr><td>T=T*</td><td>28.68</td><td>28.68</td><td>7.30E-03</td><td>9.88E-03</td></tr><tr><td rowspan="2">NS</td><td>T=1</td><td>2.48</td><td>1.91</td><td>1.83E-02</td><td>1.53E-02</td></tr><tr><td>T=T*</td><td>35.85</td><td>35.85</td><td>3.16E-02</td><td>3.97E-02</td></tr><tr><td rowspan="2">Brown</td><td>T=1</td><td>1.82</td><td>1.69</td><td>1.67E-02</td><td>1.53E-02</td></tr><tr><td>T=T*</td><td>33.16</td><td>33.16</td><td>2.75E-02</td><td>3.34E-02</td></tr></table>
Table 2: Expected calibration errors (% ECE and % CECE) for GPT-2 small on Dundee, Natural Stories (NS) and Brown. Results are all evaluated on the equally-spaced binning scheme and log binning scheme.

Figure 2: Relationship between $\Delta_{\mathrm{llh}}$ of GPT-2 models and corresponding temperature. T is scaled from 1.0 to 10.



Figure 3: Relationship between $\Delta_{\mathrm{MSE}}$ and negative log actual-word probability (surprisal). We set the number of bins to 20. Black dashed lines denote $\Delta_{\mathrm{MSE}} = 0$. Subsets containing less than $1\%$ of the data are ignored for each corpus.
to its optimal value as follows:
$$
\Delta_{\mathrm{MSE}}(F) = \mathrm{MSE}_{T=1}(x_F) - \mathrm{MSE}_{T=T^*}(x_F), \tag{8}
$$
where $\mathrm{MSE}_{T = T^{\prime}}(x_{F})$ is the MSE calculated over all the data $x_{F}$ under the linguistic factor $F$. The difference $\Delta_{\mathrm{MSE}}(F)$ thus quantifies the impact of scaling relative to the linguistic factor $F$: a higher $\Delta_{\mathrm{MSE}}(F)$ signifies a greater influence of temperature-scaled surprisal for factor $F$. To ensure sufficient data in each subset, we only consider subsets including more than $1\%$ of the data in each corpus.
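A sketch of how Eq. (8) can be evaluated in practice, assuming the per-word residuals of the two LME fits (at $T = 1$ and $T = T^*$) are available in a pandas DataFrame; all column names below are placeholders of ours.

```python
import pandas as pd

# Sketch of Eq. (8): per-factor reduction in mean squared error from T = 1 to T = T*.
def delta_mse_by_factor(df: pd.DataFrame, factor_col: str, min_share: float = 0.01) -> pd.Series:
    df = df.assign(sq_err_T1=df["resid_T1"] ** 2, sq_err_Tstar=df["resid_Tstar"] ** 2)
    grouped = df.groupby(factor_col)
    delta = grouped["sq_err_T1"].mean() - grouped["sq_err_Tstar"].mean()
    share = grouped.size() / len(df)
    return delta[share >= min_share]        # ignore subsets with less than 1% of the data
```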
# 6.1 Influence of low probability words
Given that temperature scaling enhances human likeness by reshaping the probability distribution, it is natural to investigate whether there is an inherent relationship between the probability distribution and $\Delta_{\mathrm{MSE}}$, for instance whether samples with low probability gain more from temperature scaling or the other way around. We find that high surprisal words benefit more from temperature scaling than low surprisal words, across all corpora, see Fig. 3.
# 6.2 Influence of word types
We investigate the effects of word-level properties, which include:
Named entities. Research has substantiated that named entities (NEs) require increased reading time during processing (Damasio et al., 2004; Wang et al., 2013). Oh and Schuler (2023b) showed that NEs are among the top two factors contributing to the discrepancies between large and small LMs across all corpus-by-LM combinations. Therefore, we wondered whether the effect of temperature scaling might be driven by NEs. To test this, we automatically tagged NEs using a BERT base model (Devlin et al., 2019) fine-tuned for $\mathrm{NER}^3$.
Part-of-speech tags. Similarly, previous research has argued that the poor fit of large LMs is primarily due to assigning too low surprisal estimates to open-class words like nouns and adjectives (Oh and Schuler, 2023b). We POS-tagged the corpora using the NLTK toolkit (Bird et al., 2009) with the default Penn Treebank tag set. In the following, we mainly focus on the four open-class tag classes, as well as the set of closed-class tags (CC) as a whole.
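Both word-level annotations can be obtained with off-the-shelf tools, roughly as sketched below; the NER checkpoint named here is only a stand-in for the fine-tuned BERT model referenced in the paper's footnote.

```python
import nltk
from transformers import pipeline

# Word-level properties: named entities via a BERT-based NER pipeline (checkpoint is a stand-in)
# and Penn Treebank POS tags via NLTK.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

text = "Mary visited the Louvre in Paris last summer."
print(ner(text))                                     # spans tagged PER / LOC / ORG / MISC
print(nltk.pos_tag(nltk.word_tokenize(text)))        # e.g. [('Mary', 'NNP'), ('visited', 'VBD'), ...]
```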
<table><tr><td colspan="3"></td><td colspan="2">Named entities</td><td colspan="5">POS tags</td></tr><tr><td></td><td>GPT2</td><td>Avg.</td><td>NE</td><td>non-NE</td><td>NN</td><td>ADJ</td><td>VERB</td><td>ADV</td><td>CC</td></tr><tr><td rowspan="4">Dundee</td><td>s</td><td>26.3</td><td>87.0</td><td>23.4</td><td>33.8</td><td>100.5</td><td>-2.0</td><td>2.6</td><td>10.4</td></tr><tr><td>m</td><td>41.7</td><td>152.3</td><td>36.4</td><td>57.0</td><td>123.3</td><td>7.8</td><td>27.6</td><td>16.4</td></tr><tr><td>l</td><td>40.1</td><td>158.2</td><td>34.5</td><td>56.3</td><td>126.5</td><td>4.8</td><td>19.2</td><td>14.0</td></tr><tr><td>xl</td><td>41.4</td><td>168.2</td><td>35.4</td><td>60.0</td><td>125.5</td><td>6.9</td><td>19.7</td><td>13.5</td></tr><tr><td rowspan="4">NS</td><td>s</td><td>105.7</td><td>186.8</td><td>104.6</td><td>148.7</td><td>152.5</td><td>122.0</td><td>49.0</td><td>77.1</td></tr><tr><td>m</td><td>108.5</td><td>155.9</td><td>107.9</td><td>145.3</td><td>152.0</td><td>130.1</td><td>60.8</td><td>80.8</td></tr><tr><td>l</td><td>127.7</td><td>151.6</td><td>127.3</td><td>175.6</td><td>158.6</td><td>152.9</td><td>74.8</td><td>94.3</td></tr><tr><td>xl</td><td>123.3</td><td>141.8</td><td>123.1</td><td>163.6</td><td>145.4</td><td>161.2</td><td>81.5</td><td>89.0</td></tr><tr><td rowspan="4">Brown</td><td>s</td><td>37.2</td><td>266.0</td><td>28.1</td><td>54.3</td><td>-65.2</td><td>138.1</td><td>32.1</td><td>5.9</td></tr><tr><td>m</td><td>41.4</td><td>257.6</td><td>32.8</td><td>71.4</td><td>-60.6</td><td>137.5</td><td>38.6</td><td>3.5</td></tr><tr><td>l</td><td>42.6</td><td>265.3</td><td>51.1</td><td>69.9</td><td>-110.3</td><td>160.8</td><td>17.2</td><td>24.7</td></tr><tr><td>xl</td><td>54.8</td><td>282.3</td><td>45.8</td><td>90.5</td><td>-90.2</td><td>151.3</td><td>32.2</td><td>20.0</td></tr></table>
Table 3: $\Delta_{\mathrm{MSE}}$ measured on word-level properties for the GPT-2 models on Dundee, Natural Stories (NS) and Brown. The top three subsets for each corpus-by-LM combination are underlined.
Results. Table 3 shows the primary factors responsible for the benefit of using $s_T(w_t,T)$ for each corpus-by-LM combination, with the top three influential subsets for each corpus underlined. Across all datasets and models, named entities prove to be the most beneficial word-level attribute. In contrast, closed-class words profit the least from temperature scaling. Performance trends are consistent across different model variants on the same corpus.
We also measured empirically how often temperature scaling increased vs. decreased the surprisal estimate of a word. Our results show that for ca. $90\%$ of words, surprisal estimates are increased through temperature scaling across all word classes. For the subset of named entities, a slightly smaller percentage exhibits increased surprisal estimates. For a full analysis across different corpora and models, see Table 5 in Appendix B.
We further investigate the benefit of temperature-scaled surprisal (quantified by $\Delta_{\mathrm{MSE}}$) on the subsets of words whose probability decreases (or increases). The results are in Table 4. On Dundee, the main gain arises from the reduction of large probabilities via temperature scaling. Conversely, for Natural Stories, the primary benefit comes from words with originally very low probability, which become more probable. For Brown, the effects are evenly split. These findings align with our theoretical intuition that temperature scaling improves the fit by making probabilities smoother, which means not only lowering high probabilities but also raising very low probabilities towards $1/K$, since a very low probability likewise indicates that the model is confident about the incorrectness of certain classes.
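The smoothing intuition can be illustrated with a toy distribution (the logits below are synthetic, not taken from any corpus): scaling by $T > 1$ lowers the dominant probability and raises the very small ones towards $1/K$.

```python
import torch

# Toy illustration: temperature scaling pulls probabilities toward the uniform value 1/K.
z = torch.tensor([6.0, 0.0, -6.0])      # synthetic logits, K = 3
for T in (1.0, 2.5, 100.0):
    print(T, torch.softmax(z / T, dim=0).tolist())
# T = 1.0   -> roughly [0.997, 0.0025, 0.000006]
# T = 2.5   -> roughly [0.91, 0.083, 0.0075]
# T = 100.0 -> approaching [1/3, 1/3, 1/3]
```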
Considering effects on named entities more specifically, we find that on Natural Stories and Brown, the benefit of temperature scaling can mostly be attributed to reducing the probability estimates of highly predictable entities, while on Dundee the beneficial effect mostly arises from increasing the probabilities of named entities. We speculate that this could be due to the types of the most frequent named entities that occur in the different text types, and present a more detailed analysis of this aspect in Appendix B.
# 6.3 Influence of multiple-token words
A fact that is often ignored (but see Nair and Resnik, 2023) is that modern LLMs use subword tokenization.
<table><tr><td rowspan="3">Corpus</td><td rowspan="3">GPT2</td><td colspan="2">Avg.</td><td colspan="4">Named entities</td></tr><tr><td rowspan="2">pw_t↓</td><td rowspan="2">pw_t↑</td><td colspan="2">NE</td><td colspan="2">non-NE</td></tr><tr><td>pw_t↓</td><td>pw_t↑*</td><td>pw_t↓</td><td>pw_t↑</td></tr><tr><td rowspan="4">Dundee</td><td>s</td><td>27.4</td><td>18.2</td><td>81.3</td><td>107.2</td><td>25.1</td><td>10.1</td></tr><tr><td>m</td><td>41.9</td><td>39.8</td><td>139.1</td><td>205.6</td><td>37.8</td><td>23.9</td></tr><tr><td>l</td><td>41.0</td><td>31.3</td><td>156.1</td><td>166.6</td><td>36.2</td><td>18.0</td></tr><tr><td>xl</td><td>42.5</td><td>29.8</td><td>170.2</td><td>158.8</td><td>37.0</td><td>16.9</td></tr><tr><td rowspan="4">NS</td><td>s</td><td>94.5</td><td>275.6</td><td>218.5</td><td>3.0</td><td>92.9</td><td>284.9</td></tr><tr><td>m</td><td>105.7</td><td>158.3</td><td>179.3</td><td>-34.9</td><td>104.7</td><td>163.9</td></tr><tr><td>l</td><td>125.0</td><td>166.1</td><td>197.5</td><td>-224.8</td><td>124</td><td>175.4</td></tr><tr><td>xl</td><td>121.8</td><td>140.7</td><td>197.3</td><td>-272.6</td><td>120.8</td><td>149.5</td></tr><tr><td rowspan="4">Brown</td><td>s</td><td>37.6</td><td>32.6</td><td>329.7</td><td>-170.6</td><td>26.6</td><td>45.5</td></tr><tr><td>m</td><td>39.1</td><td>72.3</td><td>276.0</td><td>143.6</td><td>30.5</td><td>66.3</td></tr><tr><td>l</td><td>52.7</td><td>28.1</td><td>325.8</td><td>-205.9</td><td>42.5</td><td>44.4</td></tr><tr><td>xl</td><td>50.9</td><td>111.5</td><td>298.2</td><td>168.2</td><td>41.7</td><td>107.1</td></tr></table>
Table 4: Given words whose probability decreases (and increases), the corresponding $\Delta_{\mathrm{MSE}}(p_{w_t} \downarrow)$ (and $\Delta_{\mathrm{MSE}}(p_{w_t} \uparrow)$ ) measurement for GPT-2 models on Dundee, Natural Stories (NS) and Brown. A higher $\Delta_{\mathrm{MSE}}$ is displayed in bold in the average across all word types (Avg.), named entities (NE), and non-named entities (non-NE) columns, respectively, for each corpus-by-LM combination. The column with * indicates insufficient (less than $1\%$ ) data.
This means that long words may consist of several tokens. In this case, the probability of the complete word is calculated by multiplying the probabilities of the subword tokens (and the word's surprisal is correspondingly calculated by adding the surprisals of the subwords). While this may often not matter, whether a word is tokenized into a single subword or several subwords can make a remarkable difference when applying temperature scaling: imagine a long, difficult word which has a low probability (and correspondingly a high surprisal). If this word were represented as a single subword token, temperature scaling might increase its probability, so that its surprisal estimate decreases at $T > 1$.
If, on the other hand, the same word were composed of two subword tokens, one or both of the subword tokens can be expected to have a higher probability (than a hypothetical single subword token), and it is possible that during temperature scaling the probabilities of the subword tokens would each be decreased at $T > 1$, such that the sum of the surprisals of the subword tokens would be much higher, compared to the word's surprisal estimate at $T = 1$.
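A small sketch of this effect, assuming GPT-2 small and an arbitrary context: the word's surprisal is accumulated over its subword tokens, each of which is scaled separately (context, word and helper name are ours).

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Sketch: surprisal of a (possibly multi-token) word is the sum of its subword surprisals,
# so temperature scaling is applied to every subword distribution along the way.
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")
LOG2 = torch.log(torch.tensor(2.0)).item()

def word_surprisal(context: str, word: str, T: float) -> float:
    ids = tok(context, return_tensors="pt").input_ids
    total = 0.0
    for wid in tok(" " + word).input_ids:              # leading space: GPT-2 BPE convention
        with torch.no_grad():
            logits = model(ids).logits[0, -1] / T      # scaled next-token distribution
        total += -torch.log_softmax(logits, dim=-1)[wid].item() / LOG2
        ids = torch.cat([ids, torch.tensor([[wid]])], dim=1)
    return total

for T in (1.0, 2.5):
    print(T, word_surprisal("She was diagnosed with", "pneumonia", T))
```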
To summarize, whether the surprisal of a certain word would increase or decrease after temperature scaling could depend on whether that word happens to be included in the subword token vocabulary or

Figure 4: Relationship between $\Delta_{\mathrm{llh}}$ of GPT-2 small on the three corpora and the corresponding temperature $T$.
not. $^{4}$ In order to quantify to what extent subword tokenization affects surprisal estimates, we conducted several analyses.
Fig. 4 shows $\Delta_{\mathrm{llh}}$ under various conditions: scaling all words (consistent with the experiments in Section 5.2) vs. restricting the analysis to the subsets of single-token and multiple-token words. The comparison between the solid, dotted, and dashed lines highlights that the benefit of temperature-scaled surprisal comes primarily from the scaling of multiple-token words.
Next, it is interesting to consider for what percentage of multiple-token words temperature-scaling increases the surprisal. We find that the surprisal of more than $90\%$ of multiple-token words increases, and the ratio is higher than across single-token words by ca. $6\%$ on Dundee and Brown, see Table 12 in Appendix L for more details.
# 7 Discussion
Our experiments demonstrate that choosing a temperature around 2.5 improves the fit to human reading times. Furthermore, we find that this effect is chiefly driven by an improved fit for words which consist of several subword tokens.$^{5}$ Named entities and other open-class words tend to contain several subword tokens more often, which can explain why temperature scaling is particularly effective for these words.
So what does all of this mean for surprisal estimates from LLMs and reading time prediction? Firstly, following the argumentation of Oh and Schuler (2023b), it is possible that the effect is indeed driven by humans failing to accurately estimate the probability of rare words (rare words being the ones that are split up into several subwords), because they do not reach sufficient language
experience or because human language models do not track these probabilities well. In this case, temperature-scaling rare words to which the LLM assigns a too high probability (and hence a low surprisal) would be a good strategy to counteract the discrepancy between humans and LLMs. From the LLMs' perspective, recall the observation from Section 5.3 that larger LLMs, which yield poorer fits to RTs, are actually better calibrated; the massive training data might thus be what drives these models away from human-like predictive processing, in line with Oh and Schuler (2023a).
Secondly, it is likely that the beneficial effect of temperature scaling is an artifact of subword tokenization, and that this effect would diminish if all words were composed of only a single subword token (cf. our explanation in Section 6.3). That is, temperature scaling would not be beneficial because of the reasons that motivated this research originally, but only because it is a way of assigning higher surprisal to words consisting of several subword tokens. In order to test this hypothesis, one would have to re-train a GPT-2 model using a vocabulary that at least includes all words that are contained in the reading time corpora, and then rerun the analysis to check whether a beneficial effect of temperature scaling can still be found.
Finally, it is also possible that the splitting of a word into subwords coincides with the reader fixating a word several times, and that these added fixations lead to an overestimate in RTs compared to the actual surprisal experienced by a human reader. Future work could investigate this hypothesis by analysing RTs on subwords instead of aggregated words (with the caveat that subword tokens may not be cognitively plausible units).
# 8 Conclusion
This paper studies the prediction of human RTs from the perspective of the probability distribution. We make the following contributions: (1) We demonstrate that the prediction of RTs can be significantly improved via temperature scaling of LLM probability estimates. (2) We demonstrate that the primary benefit of temperature-scaled surprisal is driven by words composed of several subword tokens; these words also tend to be rarer, longer open-class words. Future work should investigate the interaction of subword tokenization and temperature scaling, as well as the issue of tokenization in the analysis of eye-tracking data.
# Limitations
In this work, the optimal $T$ for temperature-scaled surprisal is identified by manual tuning. Future research could develop an automated method to determine this optimal value, e.g., from specific characteristics of LLMs or corpora. Additionally, one may ask whether a possible nonlinear relationship between surprisal and reading times (Shain et al., 2022; Hoover et al., 2023) could influence the superiority of temperature-scaled surprisal over the original surprisal. Investigating the effectiveness of temperature-scaled surprisal using generalized additive models, a family of models that makes weaker linearity assumptions than the linear mixed-effects models employed here, would be a natural extension. Finally, exploring effects of temperature-scaled surprisal on different measures of fixation duration could be considered in future work.
# Ethical Considerations
The datasets and packages we used are all publicly available and have no privacy issues.
# Acknowledgements
The authors thank Xudong Hong and Dongqi Pu for useful discussions and comments.
# References
Doris Aaronson and Hollis S Scarborough. 1976. Performance theories for sentence coding: Some quantitative evidence. Journal of Experimental Psychology: Human Perception and Performance, 2(1):56.

Bernhard Angele, Elizabeth R Schotter, Timothy J Slattery, Tara L Tenenbaum, Clinton Bicknell, and Keith Rayner. 2015. Do successor effects in reading reflect lexical parafoveal processing? Evidence from corpus-based and experimental eye movement data. Journal of Memory and Language, 79:76-96.

Christoph Aurnhammer and Stefan L Frank. 2019. Evaluating information-theoretic measures of word prediction in naturalistic sentence reading. Neuropsychologia, 134:107198.

Douglas Bates, Martin Mächler, Ben Bolker, and Steve Walker. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67:1-48.

Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc.

Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2018. Language GANs falling short. arXiv preprint arXiv:1811.02549.

Hanna Damasio, Daniel Tranel, Thomas Grabowski, Ralph Adolphs, and Antonio Damasio. 2004. Neural systems behind word and concept retrieval. Cognition, 92(1-2):179-229.

Vera Demberg and Frank Keller. 2008. Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193-210.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.

Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94-104, Copenhagen, Denmark. Association for Computational Linguistics.

Kenneth I Forster, Christine Guerrera, and Lisa Elliot. 2009. The maze task: Measuring forced incremental sentence processing time. Behavior Research Methods, 41:163-171.

Victoria Fossum and Roger Levy. 2012. Sequential vs. hierarchical syntactic models of human incremental sentence processing. In Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2012), pages 61-69, Montreal, Canada. Association for Computational Linguistics.

Stefan L Frank and Rens Bod. 2011. Insensitivity of the human sentence-processing system to hierarchical structure. Psychological Science, 22(6):829-834.

Richard Futrell, Edward Gibson, Harry J Tily, Idan Blank, Anastasia Vishnevetsky, Steven Piantadosi, and Evelina Fedorenko. 2018. The Natural Stories corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).

Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 10-18, Salt Lake City, Utah. Association for Computational Linguistics.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321-1330. PMLR.

John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.

John Hale. 2003. The information conveyed by words in sentences. Journal of Psycholinguistic Research, 32:101-123.

John Hale. 2006. Uncertainty about the rest of the sentence. Cognitive Science, 30(4):643-672.

Dan Hendrycks, Kimin Lee, and Mantas Mazeika. 2019. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning, pages 2712-2721. PMLR.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.

Jacob Louis Hoover, Morgan Sonderegger, Steven T Piantadosi, and Timothy J O'Donnell. 2023. The plausibility of sampling as an algorithmic theory of sentence processing. Open Mind, 7:350-391.

Alan Kennedy, Robin Hill, and Joel Pynte. 2003. The Dundee corpus. In Proceedings of the 12th European Conference on Eye Movement.

Meelis Kull, Miquel Perello Nieto, Markus Kangsepp, Telmo Silva Filho, Hao Song, and Peter Flach. 2019. Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with Dirichlet calibration. Advances in Neural Information Processing Systems, 32.

Ananya Kumar, Percy S Liang, and Tengyu Ma. 2019. Verified uncertainty calibration. Advances in Neural Information Processing Systems, 32.

Sawan Kumar. 2022. Answer-level calibration for free-form multiple choice question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 665-679, Dublin, Ireland. Association for Computational Linguistics.

Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, and Kentaro Inui. 2022. Context limitations make neural language models more human-like. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10421-10436, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.

Tal Linzen and T Florian Jaeger. 2014. Investigating the role of entropy in sentence processing. In Proceedings of the Fifth Workshop on Cognitive Modeling and Computational Linguistics, pages 10-18.

Danny Merkx and Stefan L. Frank. 2021. Human sentence processing: Recurrence or attention? In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 12-22, Online. Association for Computational Linguistics.

Don C Mitchell and David W Green. 1978. The effects of context and content on immediate processing in reading. The Quarterly Journal of Experimental Psychology, 30(4):609-636.

Sathvik Nair and Philip Resnik. 2023. Words, subwords, and morphemes: What really matters in the surprisal-reading time relationship? In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11251-11260, Singapore. Association for Computational Linguistics.

Byung-Doh Oh, Christian Clark, and William Schuler. 2022. Comparison of structural parsers and neural language models as surprisal estimators. Frontiers in Artificial Intelligence, 5:777963.

Byung-Doh Oh and William Schuler. 2023a. Transformer-based language model surprisal predicts human reading times best with about two billion training tokens. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 1915-1921. Association for Computational Linguistics.

Byung-Doh Oh and William Schuler. 2023b. Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times? Transactions of the Association for Computational Linguistics, 11:336-350.

Tiago Pimentel, Clara Meister, Ethan G. Wilcox, Roger Levy, and Ryan Cotterell. 2023. On the effect of anticipation on reading times. Transactions of the Association for Computational Linguistics.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Amir Rahimi, Kartik Gupta, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. 2020a. Post-hoc calibration of neural networks. arXiv preprint arXiv:2006.12807, 2.

Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Richard Hartley, and Byron Boots. 2020b. Intra order-preserving functions for calibration of multiclass neural networks. Advances in Neural Information Processing Systems, 33:13456-13467.

Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3):372.

David Reeb and Michael M Wolf. 2015. Tight bound on relative entropy by entropy difference. IEEE Transactions on Information Theory, 61(3):1458-1473.

Alfréd Rényi. 1961. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, volume 4, pages 547-562. University of California Press.

Cory Shain, Clara Meister, Tiago Pimentel, Ryan Cotterell, and Roger Levy. 2022. Large-scale evidence for logarithmic effects of word predictability on reading time.

Claude Elwood Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379-423.

Chenglei Si, Chen Zhao, Sewon Min, and Jordan Boyd-Graber. 2022. Re-examining calibration: The case of question answering. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2814-2829, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Nathaniel J Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.

Marten van Schijndel and Tal Linzen. 2019. Can entropy explain successor surprisal effects in reading? In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 1-7.

Lin Wang, Zude Zhu, Marcel Bastiaansen, Peter Hagoort, and Yufang Yang. 2013. Recognizing the emotional valence of names: An ERP study. Brain and Language, 125(1):118-127.

Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger P. Levy. 2020. On the predictive power of neural language models for human real-time comprehension behavior. In Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, pages 1707-1713.

Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, and Roger P Levy. 2023. Testing the predictions of surprisal theory in 11 languages. Transactions of the Association for Computational Linguistics.

Chiwei Zhu, Benfeng Xu, Quan Wang, Yongdong Zhang, and Zhendong Mao. 2023. On the calibration of large language models and alignment. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9778-9795, Singapore. Association for Computational Linguistics.
# A Connection to Contextual Rényi Entropy
While a lot of work has investigated the effect of next word entropy on reading times (Hale, 2003, 2006; Linzen and Jaeger, 2014; Angele et al., 2015; van Schijndel and Linzen, 2019; Aurnhammer and Frank, 2019; Pimentel et al., 2023), we will here focus on contextual Rényi entropy (the entropy of the probability distribution at the current time stamp, which is parameterized by $\alpha$ ), as proposed in Pimentel et al. (2023) to represent human anticipatory reading process. Pimentel et al. (2023) find that Rényi entropy with an optimal $\alpha^{*}$ in the range of $(0,1)$ (around $1/2$ ) obtains the best performance in reading time prediction (compared to Shannon Entropy ( $\alpha = 1$ ) or compared to unscaled surprisal estimates).
Mathematically, Contextual Rényi entropy (Rényi, 1961) is defined as:
$$
\mathrm{H}_{\alpha}(w_t \mid \boldsymbol{w}_{<t}) = \lim_{\beta \to \alpha} \frac{1}{1-\beta} \log_2 \sum_{w \in \overline{\mathcal{W}}} \left(p(w \mid \boldsymbol{w}_{<t})\right)^{\beta}. \tag{9}
$$
For given $\alpha' \in (0, \infty)$ , we simply denote $\mathrm{H}_{\alpha}(w_t | \boldsymbol{w}_{<t})|_{\alpha = \alpha'}$ as $\mathrm{H}_{\alpha}|_{\alpha = \alpha'}$ .
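For concreteness, the following is a small sketch of Eq. (9) for a finite distribution, with the $\alpha = 1$ (Shannon) and $\alpha = 0$ limits handled explicitly; the input is any next-word probability vector.

```python
import torch

# Sketch of Eq. (9): contextual Rényi entropy (in bits) of a next-word distribution p.
def renyi_entropy(p: torch.Tensor, alpha: float) -> float:
    p = p[p > 0]
    if alpha == 1.0:                    # Shannon entropy as the limit beta -> 1
        return float(-(p * torch.log2(p)).sum())
    if alpha == 0.0:                    # log of the support size
        return float(torch.log2(torch.tensor(float(p.numel()))))
    return float(torch.log2((p ** alpha).sum()) / (1.0 - alpha))

p = torch.softmax(torch.randn(50257), dim=0)            # e.g. a GPT-2-sized vocabulary
print(renyi_entropy(p, 0.5), renyi_entropy(p, 1.0), renyi_entropy(p, 0.0))
```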
Theorem 1 (Monotonicity of $s_T(w_t, T)$ and $\mathrm{H}_{\alpha}(w_t \mid \boldsymbol{w}_{<t})$). Given any probability distribution $\boldsymbol{p}$ with actual-word probability $p_{w_t} > 1/K$, where $K$ is the number of classes, temperature-scaled surprisal $s_T(w_t, T)$ is strictly monotonically increasing on $\Delta_T \in [1, \infty]$, and Rényi entropy $\mathrm{H}_{\alpha}(w_t \mid \boldsymbol{w}_{<t})$ is strictly monotonically decreasing on $\Delta_{\alpha} \in [0, 1]$; in particular,
$$
\begin{array}{ll}
s_T|_{T=1} < s_T|_{T=T^*} < \lim_{T \to \infty} s_T(w_t, T), & (10)\\
\mathrm{H}_{\alpha}|_{\alpha=1} < \mathrm{H}_{\alpha}|_{\alpha=1/2} < \mathrm{H}_{\alpha}|_{\alpha=0}, & (11)
\end{array}
$$
where $T^{*}$ is the optimal $T$ of fit to RTs in the range of $\Delta_{T}$ .
Proof. Eq. (10) can be easily verified by considering the monotonicity of temperature-scaled softmax output $\sigma_{SM}(\hat{z}_{w_t} / T)$ . The second part of Eq. (11) can be rewritten as:
$$
\begin{array}{ll}
\mathrm{H}_{\alpha}|_{\alpha=1/2} = 2\log_2 \sum_{w \in \overline{\mathcal{W}}} \sqrt{p(w \mid \boldsymbol{w}_{<t})} & (12)\\
< 2\log_2 \sqrt{K \sum_{w \in \overline{\mathcal{W}}} p(w \mid \boldsymbol{w}_{<t})} & (13)\\
= -\log_2(1/K) = \mathrm{H}_{\alpha}|_{\alpha=0}, & (14)
\end{array}
$$
where for the step from Eq. (12) to Eq. (13) we use the AM-QM inequality, and $K$ is the number of classes in the tokenizer. The first part of Eq. (11) can be rewritten as:
$$
\begin{array}{ll}
\mathrm{H}_{\alpha}|_{\alpha=1/2} = 2\log_2 \sum_{w \in \overline{\mathcal{W}}} \sqrt{p(w \mid \boldsymbol{w}_{<t})} & (15)\\
> 2\log_2 \sqrt{\prod_{w \in \overline{\mathcal{W}}} \left(\frac{1}{p(w \mid \boldsymbol{w}_{<t})}\right)^{p(w \mid \boldsymbol{w}_{<t})}} & (16)\\
= -\sum_{w \in \overline{\mathcal{W}}} p(w \mid \boldsymbol{w}_{<t}) \log_2 p(w \mid \boldsymbol{w}_{<t}) = \mathrm{H}_{\alpha}|_{\alpha=1}, & (17)
\end{array}
$$
where from Eq. (15) to Eq. (16) we use the AM-GM inequality.
Theorem 2. Rényi entropy with $\alpha = 0$ is equivalent to temperature-scaled surprisal with $T \to \infty$:
$$
\mathrm{H}_{\alpha}(w_t \mid \boldsymbol{w}_{<t})\big|_{\alpha=0} = \lim_{T \to \infty} s_T(w_t, T). \tag{18}
$$
Proof. By plugging in $\alpha = 0$, contextual Rényi entropy reduces to the entropy obtained when readers attend only to the number of potential words with nonzero probability, as defined in Eq. (5) of Pimentel et al. (2023). As $T \to \infty$, temperature-scaled surprisal converges to the surprisal induced by random guessing. Given the assumption that $p(w \mid \boldsymbol{w}_{<t}) > 0$ for each word $w \in \overline{\mathcal{W}}$, the LHS becomes:
$$
\mathrm{LHS} = -\log_2(1/K), \tag{19}
$$
where $K$ is the number of classes. As $T\to \infty$ RHS becomes:
$$
\begin{array}{ll}
\mathrm{RHS} = -\lim_{T \to \infty} \log_2 \dfrac{e^{z_{w_t}/T}}{\sum_{w \in \overline{\mathcal{W}}} e^{z_w/T}} & (20)\\
= -\log_2(1/K). & (21)
\end{array}
$$
Theorem 3. For $K \geq 2$, the expectation of the $L1$ norm between Rényi entropy with $\alpha = 1$ and temperature-scaled surprisal with $T = 1$ has an upper bound:
$$
\mathbb{E}\left[\left|s_T|_{T=1} - \mathrm{H}_{\alpha}|_{\alpha=1}\right|\right] < \sqrt{\frac{1}{4}\log^2(K-1) + 1} \tag{22}
$$
Proof. With Jensen's inequality, we have:
$$
\begin{array}{ll}
\mathbb{E}\left[\left|s_T|_{T=1} - \mathrm{H}_{\alpha}|_{\alpha=1}\right|\right] & (23)\\
\leq \sqrt{\mathbb{E}\left[\left(s_T|_{T=1} - \mathrm{H}_{\alpha}|_{\alpha=1}\right)^2\right]} & (24)\\
= \sqrt{\mathbb{E}\left[\left(-\log_2 p_{w_t} - \sum_{w \in \overline{\mathcal{W}}} p(w)\left(-\log_2 p(w)\right)\right)^2\right]} & (25)\\
= \sqrt{\operatorname{Var}\left[s_T|_{T=1}\right]} & (26)\\
< \sqrt{\frac{1}{4}\log^2(K-1) + 1}, & (27)
\end{array}
$$
where $\operatorname{Var}[\cdot]$ denotes the variance. The last inequality is shown by Lemma 4, completing the proof of this theorem.
Lemma 4 (Maximum variance of the surprisal). (See Theorem 8 and Lemma 15 in (Reeb and Wolf, 2015)). Let $\rho = \mathrm{diag}(p_1,p_2,\dots,p_d)$ be a state on a $d$ -dimensional system. Let $-\log p_i$ be the surprisal of the output $i$ in this system. Define $N_{d}$ to be:
$$
N_d := \frac{1}{4}\log^2(d-1) + 1. \tag{28}
$$
For $d \geq 2$ , the variance of surprisal has a tight upper bound:
$$
\operatorname{var}_{\rho}(-\log \rho) < N_d \tag{29}
$$
Theorem 2 claims the equivalence of temperature-scaled surprisal $s_{T}(w_{t}, T)$ and Rényi entropy $\mathrm{H}_{\alpha}$ when $T \to \infty$ and $\alpha = 0$. Theorem 3, on the other hand, gives an upper bound for $T = 1$ and $\alpha = 1$. Intuitively, for $T \in (1, \infty)$, $s_{T}$ can be considered a softened version of $s_{T}|_{T=1}$; similarly, for $\alpha \in (0, 1)$, $\mathrm{H}_{\alpha}$ can be considered a softened version of $\mathrm{H}_{\alpha}|_{\alpha=1}$. Mathematically, Theorem 1 provides the monotonicity of both functions within their respective domains. Hypothetically, given the above conditions, when tuning both functions for a better fit to RTs, $s_{T}|_{T=T^{*}}$ and $\mathrm{H}_{\alpha}|_{\alpha=1/2}$ might be close. Empirically, Fig. 5 illustrates the relationship between averaged Rényi entropy $\overline{\mathrm{H}}_{\alpha}|_{\alpha=\{0,1/2,1\}}$ and averaged temperature-scaled surprisal $\overline{s}_{T}|_{T=\{1,T^{*},\infty\}}$ on the three corpora. Notably, $\overline{\mathrm{H}}_{\alpha}|_{\alpha=1/2}$ and $\overline{s}_{T}|_{T=T^{*}}$ are closely aligned, especially when compared with the other entropy and surprisal data points. This empirical evidence partly verifies Theorem 2, Theorem 3 and our hypothesis.
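The orderings in Eqs. (10)-(11) and the limit of Theorem 2 can also be checked numerically on a synthetic distribution, as in the rough sketch below (the logits and vocabulary size are made up):

```python
import torch

# Numerical check of Theorems 1 and 2 on a random distribution with synthetic logits.
K = 1000
z = torch.randn(K) * 3.0
idx = int(torch.argmax(z))                         # a word with p_{w_t} > 1/K

def scaled_surprisal(T: float) -> float:
    return float(-torch.log_softmax(z / T, dim=0)[idx] / torch.log(torch.tensor(2.0)))

def renyi(alpha: float) -> float:
    p = torch.softmax(z, dim=0)
    if alpha == 1.0:
        return float(-(p * torch.log2(p)).sum())
    return float(torch.log2((p ** alpha).sum()) / (1.0 - alpha))

s1, s_star, s_inf = scaled_surprisal(1.0), scaled_surprisal(2.5), scaled_surprisal(1e9)
h1, h_half, h0 = renyi(1.0), renyi(0.5), float(torch.log2(torch.tensor(float(K))))
print(s1 < s_star < s_inf)           # Eq. (10): surprisal grows with T
print(h1 < h_half < h0)              # Eq. (11): Rényi entropy shrinks with alpha
print(abs(s_inf - h0) < 1e-3)        # Theorem 2: both tend to -log2(1/K)
```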

Figure 5: A comparison of averaged temperature-scaled surprisal $\overline{s}_T|_{T = \{1,T^*,\infty \}}$ and Rényi entropy $\overline{\mathrm{H}}_{\alpha}|_{\alpha = \{0,1 / 2,1\}}$ .
# B Further analysis in Section 6.2
We observe that larger LMs exhibit a larger $\Delta_{\mathrm{MSE}}$ when utilizing temperature-scaled surprisal, as shown in the average column (Avg.) of Table 3. Specifically, on Dundee, the top two models achieving the largest improvement through temperature scaling are GPT-2 medium and xl, while GPT-2 large and xl benefit the most on Natural Stories and Brown. This result is consistent with the previously observed $\Delta_{\mathrm{llh}}$ improvement ($\Delta_{\mathrm{llh}}+$) across corpus-by-LM combinations reported in Table 1, suggesting a correlation between model likelihood and the MSEs of the regression models. We do not observe the mismatch posited by Oh and Schuler (2023b), namely that LME models achieve similar MSEs irrespective of obvious differences in model likelihood.
Regarding the effect of the change (increase or decrease) of actual-word probability on the final fit to RTs, we first analyzed the ratio of probabilities decreasing (or increasing) for all words, as well as for subsets with specific word-level properties, choosing named entities as the representative case, as shown in Table 5. We observed that the probabilities of the majority of words (around $80-90\%$) decrease under temperature scaling. Compared with the average across all word types (the 'Avg.' column), named entities exhibit a lower ratio of probability reduction. Larger LMs tend to have a higher ratio, especially for named entities, likely because smaller models may lack specific knowledge of less common terms such as named entities.
Recall one of the results from Section 6.2: the main advantage of temperature-scaled surprisal arises from the reduction of large probabilities on Dundee and the amplification of small probabilities
on Natural Stories. For named entities, however, the pattern is reversed on Dundee vs. on Natural Stories and Brown: for the latter two corpora, the advantage is primarily due to reducing the probabilities of highly predictable entities. We shed light on a possible reason for this discrepancy in Fig. 6, which displays the top 15 most frequent named entities for GPT-2 small on the three corpora. Notably, Natural Stories and Brown show a marked lack of words with increased probabilities (blue bins) compared to Dundee. This lack weakens the overall impact of rising probabilities (quantified by $\Delta_{\mathrm{MSE}}(p_{w_t}\uparrow)$). Specifically, on Brown, only 4 out of the 15 most frequent words contain a portion of increased probabilities (blue bins), which correlates with the largest discrepancy in $\Delta_{\mathrm{MSE}}$ between probabilities that decrease (329.7) and those that increase (-170.6) in Table 4.
<table><tr><td rowspan="2">Corpus</td><td rowspan="2">GPT2</td><td colspan="2">Avg.</td><td colspan="2">Named entities</td></tr><tr><td>pw_t ↓</td><td>|res| ↓</td><td>pw_t ↓</td><td>|res| ↓</td></tr><tr><td rowspan="4">Dundee</td><td>s</td><td>88.0</td><td>51.8</td><td>78.1</td><td>52.3</td></tr><tr><td>m</td><td>89.6</td><td>52.5</td><td>80.1</td><td>54.1</td></tr><tr><td>l</td><td>90.2</td><td>52.3</td><td>80.1</td><td>53.5</td></tr><tr><td>xl</td><td>91.4</td><td>52.4</td><td>82.7</td><td>54.3</td></tr><tr><td rowspan="4">Natural Stories</td><td>s</td><td>93.8</td><td>55.0</td><td>85.3</td><td>51.8</td></tr><tr><td>m</td><td>94.7</td><td>55.2</td><td>89.1</td><td>53.2</td></tr><tr><td>l</td><td>93.5</td><td>55.7</td><td>89.1</td><td>53.4</td></tr><tr><td>xl</td><td>92.1</td><td>55.5</td><td>88.2</td><td>52.8</td></tr><tr><td rowspan="4">Brown</td><td>s</td><td>91.8</td><td>51.5</td><td>87.3</td><td>50.9</td></tr><tr><td>m</td><td>93.2</td><td>51.5</td><td>86.1</td><td>50.9</td></tr><tr><td>l</td><td>93.3</td><td>51.8</td><td>88.6</td><td>52.1</td></tr><tr><td>xl</td><td>93.5</td><td>51.7</td><td>87.8</td><td>53.3</td></tr></table>
Table 5: The ratio of words whose predicted-word probability $p_{w_t}$ gets smaller and whose absolute residual $|res|$ gets smaller, for the GPT-2 models on the three corpora.
# C Preprocessing steps
On the Dundee ET corpus (Kennedy et al., 2003), we use the first-pass gaze duration. Following prior work (Kuribayashi et al., 2022), we remove words containing numbers or punctuation, words that are either the first or the last one in a line, as well as words whose previous words contain numbers or punctuation. On the Natural Stories SPR corpus (Futrell et al., 2018), following Shain et al. (2022), we remove words if the RT is less than $100\mathrm{ms}$ or greater than $3,000\mathrm{ms}$, if the words are in the first or last position of each story, if participants answered fewer than 5 out of 8 comprehension questions correctly, if words contain numbers or punctuation,
and if their previous words contain numbers or punctuation. On the Brown SPR corpus (Smith and Levy, 2013), following Shain et al. (2022), we remove words if the RT is less than $100\mathrm{ms}$ or greater than $3,000\mathrm{ms}$ and if words contain numbers or punctuation.
# D Further descriptions on metrics and evaluation
We evaluate calibration error (% ECE and % CECE) in both the equally-spaced and the log binning scheme. In the equally-spaced binning scheme, the samples are grouped into $M \in \mathbb{N}$ equally-spaced interval bins based on their confidence $\hat{p}_i$. Conversely, the log binning scheme operates under an empirical upper limit for $-\log_2 \hat{p}_i$, denoted as $\max(-\log_2 \hat{p})$. Table 6 shows the ranges of $\hat{p}$ and $-\log_2 \hat{p}$ for the GPT-2 models on the three corpora. For this scheme, we establish $M \in \mathbb{N}$ log-equally-spaced interval bins within the range $(0, \max(-\log_2 \hat{p})]$.
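A compact sketch of ECE under the two binning schemes (bins over $\hat{p}$ for the equally-spaced scheme, bins over $-\log_2 \hat{p}$ for the log scheme); the array names and the interface below are ours.

```python
import numpy as np

# Sketch: expected calibration error with equally-spaced or log-spaced bins.
def ece(confidences: np.ndarray, correct: np.ndarray, m: int = 10, log_bins: bool = False) -> float:
    scores = -np.log2(confidences) if log_bins else confidences
    upper = scores.max() if log_bins else 1.0
    edges = np.linspace(0.0, upper, m + 1)
    total, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (scores > lo) & (scores <= hi)
        if mask.any():
            total += mask.sum() / n * abs(confidences[mask].mean() - correct[mask].mean())
    return total
```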
We investigate scaling $T \in [1, 10]$ , considering both densely and sparsely distributed points. The values examined are detailed as follows: [1.0, 1.1, ..., 1.9] for dense intervals, [2.0, 2.25, ..., 3.25] for moderately spaced intervals, and [3.5, 4.0, ..., 10.0] for sparse intervals.
Following Kuribayashi et al. (2022), reading times are modelled in the base model by the following formula:
$$
\begin{array}{l}
rt \sim \text{freq} * \text{length} + \text{freq\_prev\_1} * \text{length\_prev\_1} \\
\quad + (1 \mid \text{article}) + (1 \mid \text{subj\_id})
\end{array} \tag{30}
$$
The target model additionally includes surprisal estimates of the current word and the previous words:
$$
\begin{array}{l}
rt \sim \text{surprisal} + \text{surprisal\_prev\_1} + \text{surprisal\_prev\_2} \\
\quad + \text{freq} * \text{length} + \text{freq\_prev\_1} * \text{length\_prev\_1} \\
\quad + (1 \mid \text{article}) + (1 \mid \text{subj\_id}).
\end{array} \tag{31}
$$
On the Dundee corpus, both models also include the features [screenN, lineN, segmentN]. We also perform experiments with both models without interactions among predictors in Appendix I.
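The paper fits these regressions with lme4 in R (Bates et al., 2015); as a rough Python analogue (not the exact specification, since only a single random intercept for subjects is kept and the article intercept is dropped), one could sketch the base and target models with statsmodels as follows; column names are placeholders.

```python
import statsmodels.formula.api as smf

# Rough analogue of Eqs. (30)-(31): fit base and target mixed models, compare per-word log-likelihood.
def delta_llh(df):
    base = smf.mixedlm(
        "rt ~ freq * length + freq_prev_1 * length_prev_1",
        df, groups=df["subj_id"]).fit()
    target = smf.mixedlm(
        "rt ~ surprisal + surprisal_prev_1 + surprisal_prev_2 "
        "+ freq * length + freq_prev_1 * length_prev_1",
        df, groups=df["subj_id"]).fit()
    return (target.llf - base.llf) / len(df)     # per-word delta log-likelihood
```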
# E Exploring further effectiveness of temperature-scaled surprisal over basic predictors
In this section, we explore the question of whether the benefit of temperature-scaled surprisal holds



Figure 6: Top 15 most frequent named entities for GPT-2 small on Dundee, Natural Stories and Brown. $\uparrow$ and $\downarrow$ denote probability becoming higher and lower, respectively. $\bigcirc$ and $\times$ denote words that do not benefit (absolute residual error increases) and words that benefit (absolute residual error decreases) from temperature scaling, respectively.
<table><tr><td></td><td>$\hat{p}$</td><td>$-\log_2 \hat{p}$</td></tr><tr><td>Dundee</td><td>[4.99e-03, 1)</td><td>(0, 7.65]</td></tr><tr><td>Natural Stories</td><td>[8.567e-03, 1)</td><td>(0, 6.87]</td></tr><tr><td>Brown</td><td>[8.15e-03, 1)</td><td>(0, 6.94]</td></tr></table>
Table 6: Ranges of $\widehat{p}$ and $- {\log }_{2}\widehat{p}$ for GPT2s on Dundee, Natural Stories and Brown.

Figure 7: Relationship between $\Delta_{\mathrm{llh}}$ of GPT-2 small and the corresponding temperature. $T$ is scaled from 1.0 to 10. The base predictor variables $\mathbf{v}^{base}$ are set to 0 and the target predictor variables consist only of temperature-scaled surprisal $s_T(w_t,T)$.


only for regression models that already contain other predictors such as length and frequency. We conduct experiments similar to those detailed in Section 5.2 while setting the base predictor variables $\mathbf{v}^{base}$ to 0 and the target predictor variables $\mathbf{v}^{tgt}$ to only temperature-scaled surprisal $s_T(w_t, T)$ in Eq. 7. Fig. 7 shows that excluding the base predictors reduces, but does not eliminate, the effectiveness of temperature-scaled surprisal.
# F Calibration error for single-token and multiple-token words
In Table 7, we demonstrate the calibration error (%ECE) for single-token and multiple-token words for GPT-2 small. Calibration evaluation is conducted at the token level as before. Results indicate that multiple-token words show larger calibration errors than single-token words.
# G Probability distribution before and after temperature scaling
Fig. 8 shows actual-word probability distribution before and after temperature scaling for GPT-2 small on three corpora. Multiple-token words tend to have smaller probabilities than single-token words, both before and after temperature scaling.
# H Significance test of temperature-scaled surprisal
We report the statistical significance for the most representative model, GPT-2 small, on the three corpora in Table 8. Models with temperature-scaled surprisal lead to a statistically significant positive $\Delta_{\mathrm{llh}}$ $(p < 0.001)$.
# I Analysis on correlations among predictors
We investigate the question of whether the benefit of temperature-scaled surprisal is primarily due to the interactions and correlations among predictors. We first run experiments with the original target LME model as in Eq. 31 (denoted as model 1), a model that has no interactions between frequency and length as in Eq. 32 (denoted as model 2) and a third model that has no interactions and additionally includes random slopes for subject as in Eq. 33 (denoted as model 3).
$$
\begin{array}{l}
rt \sim \text{surprisal} + \text{surprisal\_prev\_1} + \text{surprisal\_prev\_2} \\
\quad + \text{freq} + \text{length} + \text{freq\_prev\_1} + \text{length\_prev\_1} \\
\quad + (1 \mid \text{article}) + (1 \mid \text{subj\_id}).
\end{array} \tag{32}
$$
$$
\begin{array}{l}
rt \sim \text{surprisal} + \text{surprisal\_prev\_1} + \text{surprisal\_prev\_2} \\
\quad + \text{freq} + \text{length} + \text{freq\_prev\_1} + \text{length\_prev\_1} \\
\quad + (1 \mid \text{article}) + (\text{surprisal} \mid \text{subj\_id}).
\end{array} \tag{33}
$$
The results are in Table 9. Removing the interactions among predictors or additionally including random slopes does not influence the effectiveness of temperature-scaled surprisal.
Furthermore, we also investigated the correlations among predictors by examining the correlation matrix for GPT-2 small on the three corpora (model 1). Figures 9, 10 and 11 indicate that temperature-scaled surprisal does not exhibit a stronger correlation with the other predictors than the original surprisal, as shown in the surprisal column ('surp'), which rules out the concern that the primary benefits are simply due to correlations between the baseline predictors and temperature-scaled surprisal.
<table><tr><td></td><td></td><td>ECEsingle</td><td>ECEmultiple</td></tr><tr><td rowspan="2">Dundee</td><td>T=1</td><td>1.98</td><td>2.05</td></tr><tr><td>T=T*</td><td>25.58</td><td>36.10</td></tr><tr><td rowspan="2">Natural Stories</td><td>T=1</td><td>2.20</td><td>3.78</td></tr><tr><td>T=T*</td><td>32.38</td><td>47.02</td></tr><tr><td rowspan="2">Brown</td><td>T=1</td><td>1.69</td><td>3.86</td></tr><tr><td>T=T*</td><td>28.70</td><td>42.99</td></tr></table>
Table 7: Expected calibration errors of tokens in single-token (% $\mathrm{ECE}_{\mathrm{single}}$) and multiple-token words (% $\mathrm{ECE}_{\mathrm{multiple}}$) before and after temperature scaling for GPT-2 small on Dundee, Natural Stories and Brown. Results are all evaluated on the equally-spaced binning scheme.
<table><tr><td>Corpora</td><td>Models</td><td>p</td></tr><tr><td>Dundee</td><td>target vs. base</td><td><0.001</td></tr><tr><td>NS</td><td>target vs. base</td><td><0.001</td></tr><tr><td>Brown</td><td>target vs. base</td><td><0.001</td></tr></table>
Table 8: Significance of temperature-scaled surprisal for GPT2 small on three corpora with $T = {T}^{ * }$ .
<table><tr><td>Corpora</td><td>Models</td><td>T*</td><td>Δllh(T=1)</td><td>Δllh(T=T*)</td><td>Δllh+</td></tr><tr><td>Dundee</td><td>model1</td><td>2.75</td><td>6.90</td><td>8.45</td><td>22.5</td></tr><tr><td>Dundee</td><td>model2</td><td>2.75</td><td>6.79</td><td>8.12</td><td>19.6</td></tr><tr><td>Dundee</td><td>model3</td><td>2.75</td><td>7.81</td><td>9.12</td><td>16.8</td></tr><tr><td>Natural Stories</td><td>model1</td><td>2.5</td><td>4.36</td><td>6.99</td><td>60.3</td></tr><tr><td>Natural Stories</td><td>model2</td><td>2.5</td><td>4.35</td><td>6.99</td><td>60.7</td></tr><tr><td>Natural Stories</td><td>model3</td><td>*</td><td>*</td><td>*</td><td>*</td></tr><tr><td>Brown</td><td>model1</td><td>2.5</td><td>6.62</td><td>7.53</td><td>13.7</td></tr><tr><td>Brown</td><td>model2</td><td>2.25</td><td>6.62</td><td>7.30</td><td>10.3</td></tr><tr><td>Brown</td><td>model3</td><td>*</td><td>*</td><td>*</td><td>*</td></tr></table>
Table 9: Optimal $T^{*}$, $\Delta_{\mathrm{llh}}(T = 1)$, $\Delta_{\mathrm{llh}}(T = T^{*})$, and $\Delta_{\mathrm{llh}}+$ of the three models for GPT-2 small on the three corpora. * indicates regression models that did not converge.
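As a quick arithmetic check, the Dundee / model 1 row reproduces the 22.5% improvement reported for GPT-2 small on Dundee in Table 1:

$$
\Delta_{\mathrm{llh}}{+} = \frac{\Delta_{\mathrm{llh}}(T = T^{*}) - \Delta_{\mathrm{llh}}(T = 1)}{\Delta_{\mathrm{llh}}(T = 1)} = \frac{8.45 - 6.90}{6.90} \approx 22.5\%.
$$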





Figure 8: Distribution of negative log actual-word probability (surprisal) before (left side of figure) and after (right side of figure) temperature scaling for single-token and multiple-token words for GPT-2 small on the three corpora. Surprisal values corresponding to probabilities of 0.1, 0.01 and $1/K$ (random guessing) are displayed as dashed lines.

<table><tr><td></td><td>(Intr)</td><td>surp</td><td>surp_1</td><td>surp_2</td><td>log_frq</td><td>length</td><td>log_frq_1</td><td>length_1</td><td>log_frq_2</td></tr><tr><td>surp</td><td>0.004</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_1</td><td>0.000</td><td>-0.147</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_2</td><td>-0.001</td><td>-0.057</td><td>-0.101</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq</td><td>0.0200</td><td>0.238</td><td>0.002</td><td>-0.03</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>length</td><td>0.019</td><td>-0.272</td><td>0.027</td><td>0.04</td><td>0.602</td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq_1</td><td>0.022</td><td>-0.085</td><td>0.332</td><td>-0.048</td><td>0.034</td><td>-0.021</td><td></td><td></td><td></td></tr><tr><td>length_1</td><td>0.028</td><td>0.034</td><td>-0.200</td><td>0.031</td><td>0.003</td><td>-0.025</td><td>0.650</td><td></td><td></td></tr><tr><td>log_frq_2</td><td>0.032</td><td>-0.081</td><td>0.002</td><td>0.000</td><td>0.374</td><td>0.626</td><td>-0.009</td><td>0.014</td><td></td></tr><tr><td>length_2</td><td>0.038</td><td>-0.013</td><td>-0.033</td><td>0.003</td><td>-0.003</td><td>0.043</td><td>0.509</td><td>0.578</td><td>0.014</td></tr></table>
(a) $T = 1$
<table><tr><td></td><td>(Intr)</td><td>surp</td><td>surp_1</td><td>surp_2</td><td>log_frq</td><td>length</td><td>log_frq_1</td><td>length_1</td><td>log_frq_2</td></tr><tr><td>surp</td><td>0.006</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_1</td><td>0.005</td><td>-0.145</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_2</td><td>-0.003</td><td>-0.074</td><td>-0.154</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq</td><td>0.020</td><td>-0.055</td><td>0.050</td><td>-0.013</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>length</td><td>0.017</td><td>-0.395</td><td>0.042</td><td>0.044</td><td>0.676</td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq_1</td><td>0.024</td><td>-0.058</td><td>0.063</td><td>0.011</td><td>0.051</td><td>-0.018</td><td></td><td></td><td></td></tr><tr><td>length_1</td><td>0.025</td><td>0.060</td><td>-0.353</td><td>0.075</td><td>-0.016</td><td>-0.035</td><td>0.702</td><td></td><td></td></tr><tr><td>log_frq_2</td><td>0.031</td><td>-0.156</td><td>0.004</td><td>0.004</td><td>0.409</td><td>0.634</td><td>-0.005</td><td>0.011</td><td></td></tr><tr><td>length_2</td><td>0.037</td><td>0.001</td><td>-0.088</td><td>-0.006</td><td>-0.003</td><td>0.038</td><td>0.542</td><td>0.574</td><td>0.014</td></tr></table>
|
| 522 |
+
|
| 523 |
+
(b) $T = T^{*}$
|
| 524 |
+
Figure 9: Correlation matrix for GPT2s on Dundee with (a) $T = 1$ and (b) $T = T^{*}$ .
|
| 525 |
+
|
| 526 |
+
<table><tr><td></td><td>(Intr)</td><td>surp</td><td>surp_1</td><td>surp_2</td><td>log_frq</td><td>length</td><td>log_frq_1</td><td>length_1</td><td>log_frq_2</td></tr><tr><td>surp</td><td>0.002</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_1</td><td>0.002</td><td>-0.009</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_2</td><td>0.001</td><td>0.003</td><td>-0.019</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq</td><td>0.017</td><td>0.237</td><td>0.013</td><td>-0.016</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>length</td><td>0.022</td><td>-0.181</td><td>0.018</td><td>0.011</td><td>0.692</td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq_1</td><td>0.018</td><td>0.019</td><td>0.238</td><td>-0.051</td><td>0.067</td><td>-0.015</td><td></td><td></td><td></td></tr><tr><td>length_1</td><td>0.022</td><td>0.013</td><td>-0.183</td><td>0.029</td><td>-0.01</td><td>0.011</td><td>0.672</td><td></td><td></td></tr><tr><td>log_frq_2</td><td>0.030</td><td>0.005</td><td>0.030</td><td>0.010</td><td>0.472</td><td>0.586</td><td>0.008</td><td>0.017</td><td></td></tr><tr><td>length_2</td><td>0.030</td><td>0.010</td><td>0.011</td><td>0.018</td><td>-0.005</td><td>0.02</td><td>0.468</td><td>0.589</td><td>0.023</td></tr></table>
|
| 527 |
+
|
| 528 |
+
(a) $T = 1$
|
| 529 |
+
|
| 530 |
+
<table><tr><td></td><td>(Intr)</td><td>surp</td><td>surp_1</td><td>surp_2</td><td>log_frq</td><td>length</td><td>log_frq_1</td><td>length_1</td><td>log_frq_2</td></tr><tr><td>surp</td><td>0.013</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_1</td><td>0.011</td><td>-0.108</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_2</td><td>-0.002</td><td>-0.034</td><td>-0.080</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq</td><td>0.020</td><td>0.200</td><td>0.009</td><td>-0.021</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>length</td><td>0.020</td><td>-0.194</td><td>0.014</td><td>0.010</td><td>0.700</td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq_1</td><td>0.019</td><td>-0.09</td><td>0.231</td><td>-0.026</td><td>0.048</td><td>0.001</td><td></td><td></td><td></td></tr><tr><td>length_1</td><td>0.020</td><td>0.016</td><td>-0.203</td><td>0.045</td><td>-0.013</td><td>0.014</td><td>0.667</td><td></td><td></td></tr><tr><td>log_frq_2</td><td>0.031</td><td>0.035</td><td>0.004</td><td>-0.007</td><td>0.482</td><td>0.578</td><td>0.000</td><td>0.020</td><td></td></tr><tr><td>length_2</td><td>0.031</td><td>0.015</td><td>0.038</td><td>-0.036</td><td>-0.003</td><td>0.019</td><td>0.474</td><td>0.579</td><td>0.023</td></tr></table>
|
| 531 |
+
|
| 532 |
+
(b) $T = T^{*}$
|
| 533 |
+
|
| 534 |
+
Figure 10: Correlation matrix for GPT2s on Natural Stories with (a) $T = 1$ and (b) $T = T^{*}$ .
|
| 535 |
+
|
| 536 |
+
<table><tr><td></td><td>(Intr)</td><td>surp</td><td>surp_1</td><td>surp_2</td><td>log_frq</td><td>length</td><td>log_frq_1</td><td>length_1</td><td>log_frq_2</td></tr><tr><td>surp</td><td>0.003</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_1</td><td>-0.003</td><td>-0.058</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_2</td><td>-0.001</td><td>-0.021</td><td>-0.039</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq</td><td>0.032</td><td>0.269</td><td>0.007</td><td>-0.053</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>length</td><td>0.036</td><td>-0.206</td><td>0.005</td><td>-0.007</td><td>0.691</td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq_1</td><td>0.007</td><td>-0.068</td><td>0.212</td><td>-0.044</td><td>0.084</td><td>0.021</td><td></td><td></td><td></td></tr><tr><td>length_1</td><td>0.012</td><td>-0.018</td><td>-0.379</td><td>0.036</td><td>0.022</td><td>0.060</td><td>0.484</td><td></td><td></td></tr><tr><td>log_frq_2</td><td>0.045</td><td>-0.003</td><td>0.000</td><td>-0.009</td><td>0.539</td><td>0.593</td><td>-0.013</td><td>0.016</td><td></td></tr><tr><td>length_2</td><td>0.028</td><td>-0.019</td><td>-0.09</td><td>0.018</td><td>0.020</td><td>0.054</td><td>0.247</td><td>0.347</td><td>-0.012</td></tr></table>
|
| 537 |
+
|
| 538 |
+
(a) $T = 1$
|
| 539 |
+
|
| 540 |
+
<table><tr><td></td><td>(Intr)</td><td>surp</td><td>surp_1</td><td>surp_2</td><td>log_frq</td><td>length</td><td>log_frq_1</td><td>length_1</td><td>log_frq_2</td></tr><tr><td>surp</td><td>0.019</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_1</td><td>-0.010</td><td>-0.114</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>surp_2</td><td>-0.002</td><td>-0.046</td><td>-0.096</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq</td><td>0.035</td><td>0.165</td><td>0.010</td><td>-0.049</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>length</td><td>0.032</td><td>-0.241</td><td>0.027</td><td>-0.010</td><td>0.719</td><td></td><td></td><td></td><td></td></tr><tr><td>log_frq_1</td><td>0.008</td><td>-0.103</td><td>-0.124</td><td>0.011</td><td>0.078</td><td>0.034</td><td></td><td></td><td></td></tr><tr><td>length_1</td><td>0.015</td><td>0.019</td><td>-0.572</td><td>0.079</td><td>0.018</td><td>0.043</td><td>0.580</td><td></td><td></td></tr><tr><td>log_frq_2</td><td>0.045</td><td>0.015</td><td>-0.024</td><td>-0.023</td><td>0.554</td><td>0.584</td><td>-0.012</td><td>0.026</td><td></td></tr><tr><td>length_2</td><td>0.029</td><td>0.009</td><td>-0.263</td><td>0.008</td><td>0.022</td><td>0.046</td><td>0.295</td><td>0.418</td><td>-0.005</td></tr></table>
|
| 541 |
+
|
| 542 |
+
(b) $T = T^{*}$
|
| 543 |
+
Figure 11: Correlation matrix for GPT2s on Brown with (a) $T = 1$ and (b) $T = T^{*}$ .
|
| 544 |
+
|
| 545 |
+
<table><tr><td></td><td colspan="2">GPT2 Δllh + (multiple)</td></tr><tr><td rowspan="4">Dundee</td><td>s</td><td>23.6</td></tr><tr><td>m</td><td>36.4</td></tr><tr><td>l</td><td>38.0</td></tr><tr><td>xl</td><td>42.9</td></tr><tr><td rowspan="4">NS</td><td>s</td><td>45.2</td></tr><tr><td>m</td><td>50.1</td></tr><tr><td>l</td><td>62.0</td></tr><tr><td>xl</td><td>67.8</td></tr><tr><td rowspan="4">Brown</td><td>s</td><td>9.2</td></tr><tr><td>m</td><td>13.4</td></tr><tr><td>l</td><td>17.9</td></tr><tr><td>xl</td><td>5.49</td></tr></table>
|
| 546 |
+
|
| 547 |
+
Table 10: $\Delta_{\mathrm{llh}}$ improvement from scaling only the tokens in multiple-token words (\%), where $\Delta_{\mathrm{llh}}^{+}(\mathrm{multiple}) = (\Delta_{\mathrm{llh}}(T = T^{*},\mathrm{multiple}) - \Delta_{\mathrm{llh}}(T = 1)) / \Delta_{\mathrm{llh}}(T = 1)$ , for GPT-2 models on Dundee, Natural Stories (NS) and Brown.
|
| 548 |
+
|
| 549 |
+
# J Influence of multiple-token words vs. model size
|
| 550 |
+
|
| 551 |
+
Table 10 shows the increase in $\Delta_{\mathrm{llh}}$ of temperature-scaled surprisal when only the tokens of multiple-token words are scaled. The finding that the benefit of temperature-scaled surprisal stems primarily from the scaling of multiple-token words still holds for larger LLMs; moreover, the influence of multiple-token words grows with model size.
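As a concrete reading of Table 10, the relative improvement can be computed as follows (a hedged sketch; the variable names are ours, not the paper's code):

```python
# Relative Δllh improvement when temperature scaling is applied only to the
# tokens of multiple-token words, expressed in percent (cf. Table 10).
def delta_llh_improvement(delta_llh_scaled_multiple: float, delta_llh_T1: float) -> float:
    return 100.0 * (delta_llh_scaled_multiple - delta_llh_T1) / delta_llh_T1
```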
|
| 552 |
+
|
| 553 |
+
# K Influence of word-level attributes vs. influence of multiple-token words
|
| 554 |
+
|
| 555 |
+
We explore which of the two factors has a stronger effect on the benefit of temperature-scaled surprisal: word-level attributes (Section 6.2) or multiple-token words (Section 6.3). For word types, we select named entities as the representative attribute, since they prove to be the most beneficial, as discussed in Section 6.2. For multiple-token words, we select all words consisting of more than one token. To compare the influences fairly, we normalize $\Delta_{\mathrm{MSE}}$ of each category under the linguistic factor $F$ by the proportion of that category's words among all words: $\bar{\Delta}_{\mathrm{MSE}}(F) = \Delta_{\mathrm{MSE}}(F)\cdot \mathrm{ratio}(F)$ . Table 11 shows that multiple-token words drive a much stronger averaged benefit of temperature-scaled surprisal than named entities.
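The normalization can be implemented directly; the sketch below assumes simple word counts per category (our assumption, not the paper's code):

```python
# Normalized benefit of a linguistic factor F (cf. Appendix K):
# the category's ΔMSE is weighted by its share of all words.
def normalized_delta_mse(delta_mse_F: float, n_words_F: int, n_words_total: int) -> float:
    ratio_F = n_words_F / n_words_total
    return delta_mse_F * ratio_F  # = Δ̄_MSE(F)
```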
|
| 556 |
+
|
| 557 |
+
# L Other results in Section 6
|
| 558 |
+
|
| 559 |
+
<table><tr><td></td><td>GPT2</td><td>NE</td><td>#>1</td></tr><tr><td rowspan="4">Dundee</td><td>s</td><td>3.9</td><td>17.0</td></tr><tr><td>m</td><td>6.9</td><td>26.7</td></tr><tr><td>l</td><td>7.2</td><td>27.0</td></tr><tr><td>xl</td><td>7.6</td><td>27.9</td></tr><tr><td rowspan="4">NS</td><td>s</td><td>2.6</td><td>35.9</td></tr><tr><td>m</td><td>2.2</td><td>38.4</td></tr><tr><td>l</td><td>2.1</td><td>43.3</td></tr><tr><td>xl</td><td>2.0</td><td>40.6</td></tr><tr><td rowspan="4">Brown</td><td>s</td><td>10.2</td><td>27.0</td></tr><tr><td>m</td><td>9.8</td><td>28.9</td></tr><tr><td>l</td><td>10.1</td><td>30.7</td></tr><tr><td>xl</td><td>10.8</td><td>36.0</td></tr></table>
|
| 560 |
+
|
| 561 |
+
Table 11: $\bar{\Delta}_{\mathrm{MSE}}$ measurement on named entities (NE) and multiple-token words (#>1) for GPT-2 models on Dundee, Natural Stories (NS) and Brown.
|
| 562 |
+
|
| 563 |
+
<table><tr><td rowspan="2"></td><td colspan="4">ratio of wt↓</td><td colspan="4">ratio of named entities</td></tr><tr><td>#=1</td><td>#=1</td><td>#=2</td><td>#=3</td><td>#=1</td><td>#=1</td><td>#=2</td><td>#=3</td></tr><tr><td>Dundee</td><td>87.6</td><td>93.7</td><td>90.6</td><td>98.3</td><td>3.7</td><td>16.3</td><td>16.6</td><td>17.4</td></tr><tr><td>Natural Stories</td><td>92.1</td><td>93.0</td><td>92.2</td><td>97.2*</td><td>1.3</td><td>3.5</td><td>3.3</td><td>4.7*</td></tr><tr><td>Brown</td><td>93.0</td><td>98.1</td><td>97.6</td><td>35.2*</td><td>3.3</td><td>12.3</td><td>10.9</td><td>17.0*</td></tr></table>
|
| 564 |
+
|
| 565 |
+
Table 12: This table displays the ratio of words with decreasing probability $(p_{w_t\downarrow})$ and the ratio of named entities on subsets for both single-token words (#=1) and multiple-token words (#>1) for GPT-2 small on three corpora. Numbers marked with * indicate subsets with insufficient (less than $1\%$ ) data.
|
| 566 |
+
|
| 567 |
+
<table><tr><td></td><td colspan="2">#=1</td><td colspan="2">#>1</td><td colspan="2">#=2</td><td colspan="2">#=3</td></tr><tr><td></td><td>pwt↓</td><td>pwt↑</td><td>pwt↓</td><td>pwt↑</td><td>pwt↓</td><td>pwt↑</td><td>pwt↓</td><td>pwt↑</td></tr><tr><td>Dundee</td><td>8.0</td><td>19.6</td><td>269.5</td><td>-20.3*</td><td>50.5</td><td>26.6*</td><td>497.4</td><td>125.4**</td></tr><tr><td>NS</td><td>117.3</td><td>142.3</td><td>242.5</td><td>93.0*</td><td>312.6</td><td>95.8*</td><td>-123.9*</td><td>50.6**</td></tr><tr><td>Brown</td><td>35.2</td><td>-61.0</td><td>327.3</td><td>5290.2**</td><td>17.3</td><td>5290.2**</td><td>655.0*</td><td>0.0**</td></tr></table>
|
| 568 |
+
|
| 569 |
+
Table 13: Given words with decreasing (and increasing) probability, the corresponding $\Delta_{\mathrm{MSE}}(p_{w_t} \downarrow)$ (and $\Delta_{\mathrm{MSE}}(p_{w_t} \uparrow)$ ) measurement for both single-token words (#=1) and multiple-token words (#>1) for GPT-2 small on three corpora. Numbers marked with * indicate subsets with insufficient (less than 1%) data. Numbers marked with ** indicate subsets with severely insufficient (around or less than 0.1%) data.
|
2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b0f52624039e77d1901481b2860fe2b4149113dc37fbaadfbf2244841d167d2d
|
| 3 |
+
size 1442797
|
2024/Temperature-scaling surprisal estimates improve fit to human reading times – but does it do so for the “right reasons”_/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/93d9876c-9d63-47e7-b619-7a2efb148ce1_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/93d9876c-9d63-47e7-b619-7a2efb148ce1_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/93d9876c-9d63-47e7-b619-7a2efb148ce1_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d99644f8c717d6a91646cbd6a5e531a06033c008232cf53426346d4127a17f5e
|
| 3 |
+
size 1187335
|
2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/full.md
ADDED
|
@@ -0,0 +1,506 @@
|
| 1 |
+
# Temporal Knowledge Question Answering via Abstract Reasoning Induction
|
| 2 |
+
|
| 3 |
+
Ziyang Chen $^{1*}$ Dongfang Li $^{2*}$ Xiang Zhao $^{1\dagger}$ Baotian Hu $^{2\dagger}$ Min Zhang $^{2}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup> Laboratory for Big Data and Decision, National University of Defense Technology, China
|
| 6 |
+
|
| 7 |
+
$^{2}$ Harbin Institute of Technology (Shenzhen), Shenzhen, China
|
| 8 |
+
|
| 9 |
+
{chenziyangnudt, xiangzhao}@nudt.edu.cn
|
| 10 |
+
|
| 11 |
+
{lidongfang, hubaotian, zhangmin2021}@hit.edu.cn
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
In this study, we address the challenge of enhancing temporal knowledge reasoning in Large Language Models (LLMs). LLMs often struggle with this task, leading to the generation of inaccurate or misleading responses. This issue mainly arises from their limited ability to handle evolving factual knowledge and complex temporal logic. To overcome these limitations, we propose the Abstract Reasoning Induction (ARI) framework, which divides temporal reasoning into two distinct phases: Knowledge-agnostic and Knowledge-based. This framework offers factual knowledge support to LLMs while minimizing the incorporation of extraneous noisy data. Concurrently, informed by the principles of constructivism, ARI provides LLMs with the capability to engage in proactive, self-directed learning from both correct and incorrect historical reasoning samples. By teaching LLMs to actively construct knowledge and methods, it can significantly boost their temporal reasoning abilities. Our approach achieves significant improvements, with relative gains of $29.7\%$ and $9.27\%$ on two temporal QA datasets, underscoring its efficacy in advancing temporal reasoning in LLMs. The code can be found at https://github.com/czy1999/ARI-QA.
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
"Knowledge is not simply transmitted from teacher to student, but actively constructed in the mind of the learner."
|
| 20 |
+
|
| 21 |
+
Jean Piaget
|
| 22 |
+
|
| 23 |
+
In practical scenarios, factual knowledge frequently undergoes evolution over time (Roddick and Spiliopoulou, 2002; Hoffart et al., 2011; Liang et al., 2023b, 2022). For instance, the host city of the Winter Olympic Games in 2018 was South Korea, while in 2022 it was Beijing.
|
| 24 |
+
|
| 25 |
+
*Equal Contribution.
|
| 26 |
+
|
| 27 |
+
† Corresponding authors.
|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
Figure 1: LLMs, when integrated with various levels of information, exhibit varying scopes of applicability; the more abstract and refined the knowledge, the broader its potential application.
|
| 31 |
+
|
| 32 |
+
Despite their proficiency in various linguistic tasks, LLMs often exhibit limitations in efficiently processing and understanding tasks that involve temporal information (Huang and Chang, 2023; Zhao et al., 2023a; Liang et al., 2023a).
|
| 33 |
+
|
| 34 |
+
Specifically, when tasks require complex temporal reasoning, LLMs tend to be misled during the process and provide inaccurate results. For instance, to answer "Which country's government leader visited China for the last time in 2015?", we need to (1) determine which countries visited China in 2015, and (2) filter out the country with the earliest visiting date. In step 1, LLMs easily suffer from hallucinations due to incomplete training data and the uncertainty of parameterised knowledge. In step 2, LLMs may err because of inaccurate time filtering. Within such temporal reasoning tasks, any misjudgment of temporal knowledge or error during temporal reasoning culminates in an erroneous conclusion. The problem might stem from the temporal unawareness of LLMs, impeding their ability to track and interpret events over time, particularly in situations requiring subtle and time-sensitive understanding.
|
| 35 |
+
|
| 36 |
+
Based on intuitive and empirical analysis, the causes of the problem can be identified from two aspects: lack of temporal knowledge and lack of complex temporal reasoning, defined as follows:
|
| 37 |
+
|
| 38 |
+
|
| 39 |
+
|
| 40 |
+
LACK OF TEMPORAL KNOWLEDGE. LLMs acquire vast knowledge through pre-training on extensive datasets. However, their parameters are fixed after training, which solidifies their knowledge base and leads to LLMs failing to understand unseen and evolving knowledge.
|
| 41 |
+
|
| 42 |
+
LACK OF COMPLEX TEMPORAL REASONING. Because large models generate outputs based on maximum probability, they are limited in directly conducting complex reasoning. Faced with interconnected multi-step temporal reasoning, LLMs may accumulate errors during probabilistic generation.
|
| 43 |
+
|
| 44 |
+
Current studies only partially address the above challenges, without tackling their essence. To augment LLMs' capacity for understanding unseen and evolving information, researchers incorporate external knowledge to supply contextually relevant information, known as Retrieval Augmented Generation (RAG) (Zhao et al., 2023b; Baek et al., 2023; Sun et al., 2023). Although these methods enrich LLMs' responses, limited retrieval accuracy and input length can introduce irrelevant noise and incomplete reasoning clues, degrading overall performance (Wang et al., 2023; Lu et al., 2023). Furthermore, although tailored examples can serve as prompts to guide LLMs (Dong et al., 2023; Min et al., 2022), they are often inadequate for diverse practical tasks and require substantial time and human effort to acquire high-quality examples. In conclusion, the above approaches fail to provide the necessary guidance for ongoing temporal reasoning processes and are susceptible to incorporating extraneous noise, as shown in Figure 2.
|
| 45 |
+
|
| 46 |
+
To overcome this limitation, it is crucial to recognize that LLMs are inherently limited by their reliance on passively absorbing training instances. Constructivism (Savery and Duffy, 1995; Kirschner et al., 2006), deeply embedded in philosophical and psychological schools of thought, contends that knowledge and learning emerge not from mere exposure to external information but through active construction. It asserts that learners synthesize new knowledge by building upon their existing understanding and experiences (Lake and Baroni, 2023). In this view, learning is an active and ongoing process wherein individuals continuously modify and refine their cognitive frameworks.
|
| 47 |
+
|
| 48 |
+
|
| 49 |
+
|
| 50 |
+
Inspired by the principles of constructivism, we aim to steer LLMs towards an active and self-initiated learning approach, and propose the Abstract Reasoning Induction (ARI) framework. This equips LLMs with the capacity for abstract synthesis and personalized knowledge application, enhancing relevance and utility in various contexts.
|
| 51 |
+
|
| 52 |
+
In detail, to handle the lack of temporal knowledge, we turn data generation into an active process consisting of two stages: Knowledge-agnostic and Knowledge-based. In the knowledge-agnostic part, LLMs only need to choose potential steps; only in the knowledge-based part is the corresponding action executed on the specific knowledge base to obtain the answer. This procedure offers factual knowledge support to LLMs while minimizing the incorporation of extraneous noisy data. On the other hand, to strengthen LLMs' complex temporal reasoning ability, ARI actively engages in proactive, self-directed learning from both correct and incorrect historical reasoning samples. This approach enables the LLM to summarize and generalize methodologies (i.e., knowledge-agnostic step-by-step instructions) for different types of questions. When similar questions are encountered again, these abstract methods guide the LLM to perform more efficient multi-step reasoning. By teaching LLMs to actively construct knowledge and methods, ARI can significantly boost their temporal reasoning abilities without the need for further training.
|
| 53 |
+
|
| 54 |
+
In summary, our contribution is three-fold:
|
| 55 |
+
|
| 56 |
+
- Grounded in the principles of constructivism, we offer fresh perspectives for enhancing the reasoning capabilities and task adaptability of LLMs.
|
| 57 |
+
- We present ARI, a novel temporal reasoning framework that divides the process into two phases: Knowledge-agnostic and Knowledge-based. ARI enables LLMs to learn and construct proactively from historical reasoning samples, fostering a perpetual refinement of LLMs' reasoning abilities.
|
| 58 |
+
- The experimental results demonstrate that, compared to the leading TKGQA models, our approach achieves relative improvements of $29.7\%$ and $9.27\%$ respectively on two temporal QA datasets.
|
| 59 |
+
|
| 60 |
+

|
| 61 |
+
Figure 2: Three levels of information utilisation. Information-Driven Response, which extracts pertinent knowledge to form the basis of answers; Exemplar-Based Learning, offering cases of reasoning for the language model to assimilate and guide current inferences; and Abstract Reasoning Induction, providing step-wise abstract methodological guidance to the present question, distinct from concrete knowledge, thereby steering the language model's inference process.
|
| 62 |
+
|
| 63 |
+
# 2 Related Work
|
| 64 |
+
|
| 65 |
+
# 2.1 TKGQA Models
|
| 66 |
+
|
| 67 |
+
Traditional temporal knowledge graph question answering (TKGQA) methodologies fall into two categories. The first, exemplified by TEQUILA (Jia et al., 2018), deconstructs the initial question into sub-questions and temporal constraints, employing standard KGQA models for resolution, followed by a comparative analysis to select the most fitting answer. The second approach, such as CronKGQA (Saxena et al., 2021a), seeks to leverage TKG embeddings for semantic similarity assessments in answer determination, featuring a learnable reasoning process independent of handcrafted rules. Despite CronKGQA's proficiency with simpler inquiries, its performance falters with complex questions necessitating specific temporal inference. TempoQR (Mavromatis et al., 2021) addresses this by incorporating temporal scope data and employing the EaE method (Févry et al., 2020) to enrich question representation semantically.
|
| 68 |
+
|
| 69 |
+
However, traditional approaches rely on handcrafted rules or learnable representations, struggling with sophisticated temporal reasoning (Chen et al., 2022). In contrast, our model, leveraging the power of LLMs, excels in these challenging scenarios, showcasing superior adaptability and reasoning capabilities.
|
| 70 |
+
|
| 71 |
+
# 2.2 LLM Reasoning with External Information
|
| 72 |
+
|
| 73 |
+
Addressing hallucinations in generative models presents a compelling challenge, with one promising solution being the augmentation of LLMs with external knowledge (Mialon et al., 2023). Integration with an external knowledge base has become a prevalent strategy in question-answering and conversational tasks (Peng et al., 2023). There are mainly two approaches: explicit and implicit knowledge injection (Yang et al., 2023a). Explicit injection involves directly supplying LLMs with pertinent knowledge via prompts. For instance, KAPING (Baek et al., 2023) retrieves facts relevant to a query from a knowledge graph and appends these to the query as a prompt for the LLM, while CoK (Li et al., 2023a) first evaluates answer credibility and, if necessary, uses the LLM to decompose the question and generate various SPARQL queries to extract information from external knowledge bases. ChatKBQA (Luo et al., 2023) finetunes LLMs on KG structure to generate logical queries, which can be executed on KGs to obtain answers. Symbol-LLM (Xu et al., 2023) proposes a dual-stage fine-tuning framework to integrate symbolic knowledge into LLMs, enhancing their reasoning capabilities. ToG (Sun et al., 2023) treats the LLM as an agent to interactively explore related entities and relations on KGs and perform reasoning based on the retrieved knowledge.
|
| 74 |
+
|
| 75 |
+
Implicit injection, on the other hand, subtly steers the LLM by incorporating knowledge semantic embeddings during reasoning or in the decoding process. KID (Liu et al., 2022) represents a novel decoding algorithm for generative LMs that dynamically infuses external knowledge at each step of LM decoding, and KPE (Zhao et al., 2023b) introduces a trainable parameter-sharing adapter to a parameter-freezing PLM for knowledge integration. Additionally, the incorporation of knowledge from other modalities such as visual information (Li et al., 2023c,b) has also become a method of knowledge introduction.
|
| 76 |
+
|
| 77 |
+
While the integration of knowledge into LLMs can mitigate issues of hallucinations, it is not without challenges. Explicit knowledge injection often struggles to acquire high-quality, relevant information and is constrained by finite-length contexts. Implicit injection typically necessitates fine-tuning of parameters, which can be prohibitively costly. We address these limitations by dividing temporal knowledge reasoning into two distinct components: knowledge-based and knowledge-agnostic. This approach achieves a clear separation between knowledge and reasoning, thereby circumventing the aforementioned constraints.
|
| 78 |
+
|
| 79 |
+
# 2.3 LLM Reasoning with Memories
|
| 80 |
+
|
| 81 |
+
Memory plays a pivotal role in human intelligence (Atkinson and Shiffrin, 1968). Given that LLMs inherently lack long-term memory and their short-term memory is constrained by the scope of their context window, numerous studies have embarked on the journey to equip LLMs with memory capabilities (Pan et al., 2023; Zhong et al., 2023). Instead of the conventional approach where accumulated conversations are retrieved directly, MemoChat (Lu et al., 2023) innovatively constructs and updates a structured, instant memo that categorizes past dialogues. Conversations are then fetched based on their specific topics and summaries. Reflexion (Shinn et al., 2023) exploits a working memory to store experiences for a dedicated task to improve the performance of the agent through several trials. However, the histories stored in working memory cannot benefit episodes with different task goals. MemPrompt (Madaan et al., 2022) designs a persistent memory to store human feedback to remind the chatbot of the conversational information and improve it continuously. RLEM (Zhang et al., 2023) adopts a persistent environment-grounded experience memory to store experiences and assist in future decision-making, even for different task goals.
|
| 82 |
+
|
| 83 |
+
Thought Propagation (Yu et al., 2023) emphasizes the ability to explore and apply insights from analogous solutions. By delving into and utilizing solutions from problems related to the given issue, it improves performance and accuracy across various tasks.
|
| 84 |
+
|
| 85 |
+
However, current memory-enhanced methods are limited to passively receiving historical information, overlooking the active construction of abstract knowledge based on previous experience. Starting from constructivism, our proposed method provides large models with an active and continuous learning process, offering knowledge that is abstract and generalized.
|
| 86 |
+
|
| 87 |
+
# 3 Method
|
| 88 |
+
|
| 89 |
+
# Algorithm 1 Abstract Reasoning Induction
|
| 90 |
+
|
| 91 |
+
Require: Temporal knowledge graph $\mathcal{K}$ , question $q$ , historical memory $H_{q}$ , abstract methodology instruction set $M_{C}$
|
| 92 |
+
|
| 93 |
+
Ensure: Answer to the question $q$
|
| 94 |
+
|
| 95 |
+
1: $M_C\gets LLM(H_q)$ (4)
|
| 96 |
+
2: Initialize subject entity $e_h$ from $q$
|
| 97 |
+
3: Find 1-hop subgraph $G_{e_h}$ of $e_h$ in $\mathcal{K}$ (1)
|
| 98 |
+
4: Enumerate initial candidate actions $P_0$ from $G_{e_h}$ (2)
|
| 99 |
+
5: while $LLM(M_{C^*}, P_{t_i}') \neq answer(a)$ do
|
| 100 |
+
6: Filter candidate actions $P_{t_i}$ to get $P_{t_i}'$ (3)
|
| 101 |
+
7: $C_t^* \gets$ findKmeansCluster(q) (5)
|
| 102 |
+
8: $a_{i}^{*} = LLM(M_{C^{*}},q,P_{t_{i}}^{\prime})$ (6)
|
| 103 |
+
9: Execute selected action $a_{i}^{*}$ and update current environment
|
| 104 |
+
10: Regenerate candidate actions for the next step (2)
|
| 105 |
+
11: if $t_i \geq t_{max}$ then
|
| 106 |
+
12: Break
|
| 107 |
+
13: end if
|
| 108 |
+
14: end while
|
| 109 |
+
15: Execute final action $a_{i}^{*}$ to obtain the answer
|
| 110 |
+
16: Add current process to $H_{q}$
|
| 111 |
+
17: return Answer derived from the reasoning process
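To make the control flow of Algorithm 1 concrete, here is an illustrative Python sketch; the helper callables (enumerate_actions, filter_actions, execute, llm_choose) and the terminal-action convention are our assumptions, not the authors' released code:

```python
# Illustrative sketch of Algorithm 1. The environment-facing helpers are passed in
# as callables so the knowledge-based and knowledge-agnostic parts stay separate.
from typing import Callable, List

def ari_reasoning(question: str,
                  abstract_instruction: str,                             # M_{C*}, Eq. (5)
                  enumerate_actions: Callable[[str], List[str]],         # Eq. (2), on the TKG
                  filter_actions: Callable[[List[str], str], List[str]], # Eq. (3)
                  execute: Callable[[str], str],                         # runs an action on the TKG
                  llm_choose: Callable[[str, str, List[str]], str],      # Eq. (6)
                  t_max: int = 5) -> str:
    state, result = question, ""
    for _ in range(t_max):
        candidates = filter_actions(enumerate_actions(state), question)
        action = llm_choose(abstract_instruction, question, candidates)
        result = execute(action)
        if action.startswith("answer"):   # terminal action selected by the LLM
            break
        state = result                    # update the environment for the next step
    return result
```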
|
| 112 |
+
|
| 113 |
+
# 3.1 Task Definition
|
| 114 |
+
|
| 115 |
+
Given a Temporal Knowledge Graph (TKG) $\mathcal{K}$ and a natural language question $q$ , TKGQA aims to
|
| 116 |
+
|
| 117 |
+

|
| 118 |
+
Figure 3: Model architecture of ARI. Our framework divides temporal reasoning into two distinct phases: Knowledge-agnostic and Knowledge-based. This division aims to reduce instances of hallucinations and improve LLM's capacity for integrating abstract methodologies derived from historical experience. See the detailed instructions in Appendix A.4.
|
| 119 |
+
|
| 120 |
+
extract an entity $s / o \in \mathcal{E}$ or a timestamp $\tau \in \mathcal{T}$ that correctly answers the question $q$ . For instance, for the question 'Which country's government leader visited China for the last time in 2015?', based on the event information contained in the TKG, the answer is the KG entity Vietnam.
|
| 121 |
+
|
| 122 |
+
# 3.2 ARI Framework
|
| 123 |
+
|
| 124 |
+
The constructivist perspective posits that knowledge does not merely encapsulate universal laws but must be contextually reconstructed for specific situations. This view emphasizes that understanding is a construct developed by the learner, uniquely shaped by their experiential background and dependent on their learning trajectory in a particular context (Kirschner et al., 2006; Savery and Duffy, 1995). In line with this philosophy, we introduce ARI framework. We overview the framework in Figure 3. Diverging from previous research that directly feeds knowledge into LLMs, we divide temporal knowledge reasoning into two parts: knowledge-agnostic and knowledge-based.
|
| 125 |
+
|
| 126 |
+
The knowledge-based module extracts relevant TKG subgraphs based on the given question, generating all feasible fine-grained actions through traversal (cf. § 3.3). This module encompasses all operations and interactions with knowledge, freeing the LLM from the vast, noisy specific information to focus on reasoning. This design approach allows us to build intricate knowledge queries by combining fine-grained atomic operations, while effortlessly adapting to various knowledge bases (see the detailed operations in Appendix A.3).
|
| 127 |
+
|
| 128 |
+
In the knowledge-agnostic module, the LLM performs high-level strategic decisions and candidate action selection. We have innovated mechanisms that allow LLMs to internalize lessons from past decisions, forming generalized abstract methodologies (solid line process in Figure 3). This foundation empowers the LLM to proactively develop abstract methodological guidelines for various question types, enhancing its ability to reason on new questions efficiently (cf. §3.4).
|
| 129 |
+
|
| 130 |
+
For the inference process, ARI first categorizes the question and selects the most suitable abstract methodological guidance. According to these guidelines, the LLM interacts multiple turns with the knowledge-agnostic module to reason and gradually solve the problem, as shown in the dashed line process in Figure 3. We also present a specific reasoning example in Figure 5 of Appendix A.4.
|
| 131 |
+
|
| 132 |
+
# 3.3 Knowledge-based Interaction
|
| 133 |
+
|
| 134 |
+
In the knowledge-based interaction part, we frame complex temporal knowledge reasoning challenges as multi-step inference tasks (Gu and Su, 2022; Gu et al., 2023). At the beginning of each step, we employ a filtering mechanism to engage with the TKG and the current question. This interaction produces a set of feasible candidate actions for each step. The LLM then selects the most suitable action from these candidates. Following this selection, the model interacts with the TKG, updating the initial state for the next step in a recursive process.
|
| 135 |
+
|
| 136 |
+
|
| 137 |
+
|
| 138 |
+
Candidate Action Enumeration. Specifically, given a complex temporal question $q$ and a TKG $\mathcal{K} := (\mathcal{E}, \mathcal{R}, \mathcal{T}, \mathcal{F})$ , where $\mathcal{E}, \mathcal{R}, \mathcal{T}$ denote entities, relations, and timestamps, respectively. Starting from the subject entity $e_h$ of $q$ , we first find the 1-hop subgraph of $e_h$ in $\mathcal{K}$ . Let $N_{e_h}$ be the set of nodes in the 1-hop (undirected) neighborhood of $e_h$ in the TKG, and $R_{e_h}$ be the corresponding edge.
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
G _ {e _ {h}} = \left\{(e, r) | e \in N _ {e _ {h}}, r \in R _ {e _ {h}} \right\}, \tag {1}
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
where $G_{e_h}$ is the corresponding 1-hop subgraph of $e_h$ . For each edge $r \in G_{e_h}$ , our agent will strictly follow finely-grained atomic operations in Appendix A.3, traversing and replacing the relations and entities present in the current $G_{e_h}$ to construct the set of candidate actions $P_0$ , where the subscript 0 denotes the candidate actions for the initial step,
|
| 145 |
+
|
| 146 |
+
$$
|
| 147 |
+
P _ {0} = \{E n u m (a c t i o n, e, r) | e, r \in G _ {e _ {h}} \}. \tag {2}
|
| 148 |
+
$$
|
| 149 |
+
|
| 150 |
+
Candidate Action Filtration. However, due to the continual occurrence and updating of temporal events, even the scale of a 1-hop subgraph can be vast. This results in an excessively large set of generated candidate actions, which can significantly impede the judgment of LLMs (cf. §4.2). Consequently, we propose a filtration process for candidate actions, retaining only those that are correct, feasible, and semantically relevant.
|
| 151 |
+
|
| 152 |
+
Specifically, for each action $a$ within set $P_0$ , we execute the corresponding function on the TKG. If the function returns a non-empty value, the action is considered correct and feasible; otherwise, it is discarded. Among all remaining actions, we retain the top- $K$ actions based on the calculation of their semantic similarity to the question $q$ ,
|
| 153 |
+
|
| 154 |
+
$$
|
| 155 |
+
P _ {0} ^ {\prime} = \{a | \operatorname {e x e c} (a) \neq \emptyset \wedge a \in \operatorname {T o p} - K (P _ {0}, q) \}. \tag {3}
|
| 156 |
+
$$
|
| 157 |
+
|
| 158 |
+
Based on the LLM's decision, the agent executes the corresponding action, thereby updating the current environment. Subsequently, it regenerates the next set of candidates $P_{1}$ based on the newly identified subject entities, repeating this process until a termination command is received.
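A minimal sketch of the filtration step in Eq. (3) is shown below; the helper names (exec_on_tkg, similarity) are assumptions for illustration:

```python
# Eq. (3) in code: keep only actions that return a non-empty result on the TKG,
# then retain the top-K actions most semantically similar to the question.
def filter_candidates(candidates, question, exec_on_tkg, similarity, k=10):
    feasible = [a for a in candidates if exec_on_tkg(a)]
    feasible.sort(key=lambda a: similarity(a, question), reverse=True)
    return feasible[:k]
```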
|
| 159 |
+
|
| 160 |
+
# 3.4 Knowledge-agnostic Reasoning
|
| 161 |
+
|
| 162 |
+
The knowledge-agnostic module enables LLMs to distill and apply abstract methodologies from historical reasoning examples, allowing them to adapt to diverse questions.
|
| 163 |
+
|
| 164 |
+
This approach fosters a general methodology, applicable across various domains and to a wide range of knowledge-independent inquiries, enhancing LLMs' versatility.
|
| 165 |
+
|
| 166 |
+
Historical Memory Storage and Learning. In the LLM's reasoning process, we meticulously document the current state at each step $t_i$ , encompassing the current temporal question $q$ , the set of candidate actions $P_{t_i}$ , and the LLM's decision $a_{t_i}$ . The aggregate of all stepwise states for a given question forms the historical decision set $H_{q}$ ,
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
H _ {q} = \left\{\left(q, t _ {i}, P _ {t _ {i}}, a _ {t _ {i}}\right) \mid i \in T \right\}, \tag {4}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
where $T$ is the set of all steps in the process.
|
| 173 |
+
|
| 174 |
+
Temporal reasoning is often multi-step and complex, yet the types of reasoning involved tend to be consistent, with similar questions requiring similar inference steps. Therefore, once the LLM conducts reasoning and accumulates a series of historical inferential steps, we employ unsupervised clustering K-means (MacQueen, 1967) to categorize these historical steps into distinct clusters $C_H$ .
|
| 175 |
+
|
| 176 |
+
After the LLM engages in reasoning and compiles historical reasoning steps, these are subjected to unsupervised clustering to form distinct clusters. Each cluster contains a mix of both accurate and erroneous reasoning processes. We enable the LLM to actively learn from specific historical instances within each cluster and distill abstract methodologies independent of domain-specific knowledge.
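A hedged sketch of the clustering step is given below, assuming historical questions are embedded into vectors beforehand (the embedding model and hyperparameters are our assumptions, not the paper's configuration):

```python
# Group historical reasoning records by question embedding with K-means; each
# resulting cluster later yields one abstract methodology distilled by the LLM.
import numpy as np
from sklearn.cluster import KMeans

def cluster_history(question_embeddings: np.ndarray, n_clusters: int = 10):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(question_embeddings)   # cluster id per historical question
    return labels, km.cluster_centers_
```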
|
| 177 |
+
|
| 178 |
+
LLM Decision with Abstract Reasoning. When addressing new inference challenges, we initiate the process by identifying the historical reasoning cluster most closely aligned with the new question. We then extract its abstract methodologies to guide the LLM in its reasoning for the current question.
|
| 179 |
+
|
| 180 |
+
Specifically, for a given question $q$ , we calculate the similarity score $S(C_i, q)$ with each historical reasoning cluster $C_i$ . We then retrieve the abstract method instruction $M_{C^*}$ from the cluster that yields the maximum $S$ . Let $\{C_1, C_2, \ldots, C_n\}$ be the set of historical reasoning clusters. For a given question $q$ , we calculate the similarity score $S(C_i, q)$ for each cluster $C_i$ , and select the cluster $C^*$ with the highest similarity score to the query $q$ ,
|
| 181 |
+
|
| 182 |
+
$$
|
| 183 |
+
C ^ {*} = \underset {C _ {i}} {\operatorname {a r g m a x}} S \left(C _ {i}, q\right). \tag {5}
|
| 184 |
+
$$
|
| 185 |
+
|
| 186 |
+
The abstract method instruction $M_{C^*}$ is then selected from the cluster $C^*$ such that:
|
| 187 |
+
|
| 188 |
+
|
| 189 |
+
|
| 190 |
+
$$
|
| 191 |
+
a _ {i} ^ {*} = L L M \left(M _ {C ^ {*}}, q, P _ {i}\right), \tag {6}
|
| 192 |
+
$$
|
| 193 |
+
|
| 194 |
+
where $M_{C^*}$ is the abstract method instruction of $C^*$ , and $a_i^*$ is the final output of the LLM. The reasoning sequence concludes when the LLM outputs a termination action or when the number of reasoning steps exceeds the predetermined maximum threshold.
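Eqs. (5) and (6) can be read as a nearest-cluster lookup followed by an instructed LLM call; the sketch below uses cosine similarity over cluster centroids, which is one reasonable choice of $S$ but an assumption on our part:

```python
# Eq. (5): pick the cluster whose centroid is most similar to the new question.
# Eq. (6): prompt the LLM with that cluster's abstract instruction and the
# current filtered candidate actions.
import numpy as np

def select_cluster(q_emb: np.ndarray, centroids: np.ndarray) -> int:
    sims = centroids @ q_emb / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(q_emb))
    return int(np.argmax(sims))

def llm_decide(llm, instructions, cluster_id, question, candidates):
    prompt = (f"{instructions[cluster_id]}\n"
              f"Question: {question}\n"
              f"Candidate actions: {candidates}\n"
              f"Choose one action.")
    return llm(prompt)   # a_i^* in Eq. (6)
```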
|
| 195 |
+
|
| 196 |
+
# 4 Experiment
|
| 197 |
+
|
| 198 |
+
# 4.1 Implementation Details
|
| 199 |
+
|
| 200 |
+
We use gpt-3.5-turbo-0613 as our LLM (More experiments and analyses of other LLMs can be found in Section A.7). We configure the LLM to access and investigate a corpus of 200 historical reasoning samples, with the maximum length of reasoning path set to 5, and the number of historical path categories fixed at 10. Due to the vast size of the test set, which comprises more than 50,000 question-answer pairs, we employ a stratified sampling approach for evaluation, extracting a subset of 200 questions from the test set for each iteration. In our evaluation, we compare several baseline methods, including the traditional TKGQA models and LLM-based models (see Appendix A.1 and A.2 for more details).
|
| 201 |
+
|
| 202 |
+
# 4.2 Overall Results
|
| 203 |
+
|
| 204 |
+
Table 1 presents the comparative results of ARI against other baselines on MULTITQ.
|
| 205 |
+
|
| 206 |
+
LLM only Performance. ChatGPT's performance on the two datasets reveals a significant shortcoming in its application to temporal knowledge reasoning, even though all the knowledge required for the questions falls within the scope of its training data prior to 2021. This deficiency is particularly pronounced when compared to traditional TKGQA methods, suggesting that the parameterised knowledge acquired by LLMs does not transfer seamlessly to temporal reasoning tasks. A stark contrast in performance is observed between the MULTITQ and CRONQUESTIONS datasets, the latter benefiting from ChatGPT's extensive training on WikiData (Vrandecic and Krötzsch, 2014). This discrepancy points to a significant challenge: MULTITQ's reliance on ICEWS (Boschee et al., 2015), which encompasses more niche and frequent events, poses a difficulty for LLMs due to their limited exposure to such data during training.
|
| 207 |
+
|
| 208 |
+
This contrast elucidates the inherent limitations of LLMs in processing temporal knowledge.
|
| 209 |
+
|
| 210 |
+
Reasoning with External Knowledge. Incorporating additional knowledge graph data into LLMs, as seen with KG-RAG, considerably enhances their performance in knowledge-intensive QA tasks. KG-RAG outperforms ChatGPT by $81\%$ and $96\%$ across two datasets, respectively. Nevertheless, it still falls short of leading TKGQA models. This gap can be attributed to two main factors. First, the vast and complex nature of temporal information presents a challenge. A single question may involve thousands of related events, which cannot be accurately incorporated through prompts alone, leading to insufficient background information for reasoning. Second, the retrieved external knowledge often contains redundant or irrelevant information, which can further mislead the model's inference process.
|
| 211 |
+
|
| 212 |
+
Reasoning with Exemplar Guidance. The CoT KB approach, despite providing step-by-step reasoning guidance and knowledge-based interaction, does not reach the efficacy levels of ARI. This discrepancy stems from the exemplar-based learning method's dependence on specific instances, which can introduce extraneous knowledge and distract LLMs from focusing on reasoning methodologies, leading to inaccurate conclusions. Additionally, its static examples fail to provide the customized guidance necessary for diverse reasoning questions. Similarly, our investigation also extends to the augmented knowledge-agnostic module in ReAct, designed to mitigate the generation of infeasible actions. Despite this enhancement, a performance gap persists when compared to ARI, underscoring limitations similar to those identified in the CoT KB approach. Despite ReAct KB's interaction with the environment, it still lacks customized abstract guidance for the current reasoning question.
|
| 213 |
+
|
| 214 |
+
ARI significantly outperforms current state-of-the-art TKGQA models, achieving a relative improvement of $29.7\%$ on the MULTITQ dataset and a $9.27\%$ increase in performance on the CRONQUESTIONS dataset. These substantial gains can be attributed to the knowledge adaptability and the abstract methodology instruction mechanism, which empower LLMs to make advanced decisions. By leveraging abstract methodologies, LLMs can select optimal temporal reasoning steps without engaging with the specifics of the underlying knowledge.
|
| 215 |
+
|
| 216 |
+
<table><tr><td rowspan="3">Model</td><td colspan="5">MULTITQ</td><td colspan="5">CRONQUESTIONS</td></tr><tr><td rowspan="2">Overall</td><td colspan="2">Question Type</td><td colspan="2">Answer Type</td><td rowspan="2">Overall</td><td colspan="2">Question Type</td><td colspan="2">Answer Type</td></tr><tr><td>Simple</td><td>Complex</td><td>Entity</td><td>Time</td><td>Simple</td><td>Complex</td><td>Entity</td><td>Time</td></tr><tr><td>BERT</td><td>0.083</td><td>0.092</td><td>0.061</td><td>0.101</td><td>0.040</td><td>0.243</td><td>0.249</td><td>0.239</td><td>0.277</td><td>0.179</td></tr><tr><td>ALBERT</td><td>0.108</td><td>0.116</td><td>0.086</td><td>0.139</td><td>0.032</td><td>0.248</td><td>0.255</td><td>0.235</td><td>0.279</td><td>0.177</td></tr><tr><td>EmbedKGQA</td><td>0.206</td><td>0.235</td><td>0.134</td><td>0.290</td><td>0.001</td><td>0.288</td><td>0.290</td><td>0.286</td><td>0.411</td><td>0.057</td></tr><tr><td>CronKGQA</td><td>0.279</td><td>0.134</td><td>0.134</td><td>0.328</td><td>0.156</td><td>0.647</td><td>0.987</td><td>0.392</td><td>0.699</td><td>0.549</td></tr><tr><td>MultiQA</td><td>0.293</td><td>0.347</td><td>0.159</td><td>0.349</td><td>0.157</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ChatGPT</td><td>0.102</td><td>0.147</td><td>0.077</td><td>0.137</td><td>0.002</td><td>0.249</td><td>0.250</td><td>0.247</td><td>0.246</td><td>0.253</td></tr><tr><td>KG-RAG</td><td>0.185</td><td>0.200</td><td>0.160</td><td>0.230</td><td>0.07</td><td>0.490</td><td>0.460</td><td>0.518</td><td>0.470</td><td>0.520</td></tr><tr><td>CoT KB</td><td>0.240</td><td>0.440</td><td>0.120</td><td>0.220</td><td>0.320</td><td>0.640</td><td>0.690</td><td>0.610</td><td>0.620</td><td>0.660</td></tr><tr><td>ReAct KB</td><td>0.310</td><td>0.635</td><td>0.136</td><td>0.313</td><td>0.300</td><td>0.685</td><td>0.835</td><td>0.525</td><td>0.650</td><td>0.755</td></tr><tr><td>ARI</td><td>0.380**</td><td>0.680**</td><td>0.210**</td><td>0.394**</td><td>0.344**</td><td>0.707**</td><td>0.860</td><td>0.570**</td><td>0.660*</td><td>0.800*</td></tr></table>
|
| 217 |
+
|
| 218 |
+

|
| 219 |
+
Figure 4: Comparison of average reasoning steps of ARI on MULTITQ.
|
| 220 |
+
|
| 221 |
+
Table 1: Performance of baselines and our methods on MULTITQ and CRONQUESTIONS. * $(p\leq 0.05)$ and ** $(p\leq 0.005)$ indicate significance under a paired t-test of ARI versus the best baseline.
|
| 222 |
+
|
| 223 |
+
<table><tr><td rowspan="2">Model</td><td colspan="2">Accuracy (%)</td></tr><tr><td>MULTITQ</td><td>CRONQUESTIONS</td></tr><tr><td>ARI</td><td>38.0</td><td>70.7</td></tr><tr><td>w/o Abstract Guidance</td><td>30.5</td><td>67.1</td></tr><tr><td>w/o History Cluster</td><td>34.5</td><td>68.9</td></tr><tr><td>w/o Action Filter</td><td>33.1</td><td>66.5</td></tr><tr><td>w/o Incorrect Examples</td><td>36.5</td><td>69.2</td></tr></table>
|
| 224 |
+
|
| 225 |
+
Table 2: Ablation results of ARI.
|
| 226 |
+
|
| 227 |
+
Comparison of Reasoning Efficiency. To validate the effectiveness of abstract instruction, we conduct an evaluation of reasoning efficiency. On the test set, with all other components of the model remaining constant, we remove the abstract instruction and record the average number of steps taken for reasoning. Compared with this ablated variant, the full ARI, guided by abstract methodologies, not only improves reasoning accuracy but also reduces the average number of reasoning steps by $11.4\%$ on MULTITQ and $9.3\%$ on CRONQUESTIONS. This underscores that the guidance provided by abstract methodologies can significantly enhance the efficiency of LLMs in temporal reasoning tasks.
|
| 228 |
+
|
| 229 |
+
# 4.3 Ablation Study
|
| 230 |
+
|
| 231 |
+
To evaluate the efficacy of the individual components of the model, we conducted ablation studies.
|
| 232 |
+
|
| 233 |
+
Initially, we remove the abstract guidance component, requiring the LLM to rely on its own understanding of the questions without the aid of historical information. This results in significant performance drops on both datasets, with a $19.7\%$ decrease on MULTITQ and a $3.7\%$ decrease on CRONQUESTIONS. This suggests that distilled abstract guidance plays a substantial role in supporting the model's reasoning capabilities.
|
| 234 |
+
|
| 235 |
+
To further assess the impact of abstract guidance, we eliminate the clustering module, thus deriving a universal abstract guidance from all historical reasoning processes without categorization based on question type. The model performance dropped by $9.2\%$ on MULTITQ and $2.5\%$ on CRONQUESTIONS, indicating that a singular abstract methodology is insufficient for guiding various types of questions and that targeted abstract methodological guidance is more effective. In Appendix A.5, we illustrate the impact of varying cluster quantity on the model's final reasoning performance.
|
| 236 |
+
|
| 237 |
+
To verify the role of incorrect samples in ARI, we remove incorrect examples from the process of generating abstract methods, providing only correct examples as guidance. As evident from the results, the removal of incorrect examples leads to a decrease in the quality of abstract guidance, resulting in a performance drop. By encountering and learning these incorrect examples, LLMs become more adept at avoiding similar pitfalls in subsequent reasoning tasks. This also aligns with our intuition and has been validated in prior studies (Wang and Li, 2023; Yang et al., 2023b; An et al., 2023).
|
| 238 |
+
|
| 239 |
+
Lastly, we remove the action selection module, allowing the LLM to choose from all generated actions without filtering. This led to a decrease in performance on both datasets by $12.8\%$ and $5.9\%$ , underscoring that unfiltered actions result in an excessive number of options, including irrelevant ones, which hinders the LLM's reasoning and complicates the decision-making process.
|
| 240 |
+
|
| 241 |
+
# 5 Conclusion and Limitation
|
| 242 |
+
|
| 243 |
+
This study, anchored in the principles of constructivism, critically examines the shortcomings of LLMs in addressing complex temporal reasoning challenges and proposes an innovative approach to augment their reasoning capabilities. Through the integration of a knowledge adaptability framework and abstract methodological guidance, we have shown that LLMs can attain more precise and efficient reasoning in complex temporal scenarios, effectively overcoming their constraints in processing and interpreting time-sensitive knowledge.
|
| 244 |
+
|
| 245 |
+
Limitations. While ARI demonstrates impressive results, it also presents several limitations for ongoing refinement. Firstly, the efficacy of generating abstract guidance relies heavily on the capabilities of LLMs; smaller-scale LLMs may struggle to produce high-quality abstract guidance, potentially restricting their application. Secondly, the ARI framework depends on multi-step reasoning to arrive at final answers, a process moderately influenced by the LLM's reasoning efficiency, which extends the duration of inference. Finally, our method concentrates primarily on complex temporal reasoning, and its effectiveness in other reasoning domains remains to be examined. Future research should aim to refine these methods to make them more adaptable to various models and problem domains, enhance the balance between reasoning efficiency and depth, and expand their scope to include a broader range of reasoning tasks.
|
| 246 |
+
|
| 247 |
+
# Acknowledgement
|
| 248 |
+
|
| 249 |
+
The authors would like to thank the anonymous reviewers for their insightful and constructive comments, which greatly contributed to improving the quality of the paper. This work was partially supported by the National Key R&D Program of China (No. 2022YFB3103600), NSFC (Nos. U23A20296, 62272469, 62376067), the Science and Technology Innovation Program of Hunan Province (No. 2023RC1007), and the Guangdong Basic and Applied Basic Research Foundation (2023A1515110078).
|
| 250 |
+
|
| 251 |
+
|
| 252 |
+
|
| 253 |
+
# References
|
| 254 |
+
|
| 255 |
+
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, and Weizhu Chen. 2023. Learning from mistakes makes LLM better reasoner. CoRR, abs/2310.20689.
|
| 256 |
+
Richard C. Atkinson and Richard M. Shiffrin. 1968. Human memory: A proposed system and its control processes. In *The psychology of learning and motivation*.
|
| 257 |
+
Jinheon Baek, Alham Fikri Aji, and Amir Saffari. 2023. Knowledge-augmented language model prompting for zero-shot knowledge graph question answering. CoRR, abs/2306.04136.
|
| 258 |
+
Elizabeth Boschee, Jennifer Lautenschlager, Sean O'Brien, Steve Shellman, James Starz, and Michael Ward. 2015. ICEWS Coded Event Data.
|
| 259 |
+
Ziyang Chen, Jinzhi Liao, and Xiang Zhao. 2023. Multi-granularity temporal question answering over knowledge graphs. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
|
| 260 |
+
Ziyang Chen, Xiang Zhao, Jinzhi Liao, Xinyi Li, and Evangelos Kanoulas. 2022. Temporal knowledge graph question answering via subgraph reasoning. Knowl. Based Syst., 251:109134.
|
| 261 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
|
| 262 |
+
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2023. A survey for in-context learning. ArXiv, abs/2301.00234.
|
| 263 |
+
Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020. Entities as experts: Sparse memory access with entity supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4937-4951. Association for Computational Linguistics.
|
| 264 |
+
Yu Gu, Xiang Deng, and Yu Su. 2023. Don't generate, discriminate: A proposal for grounding language models to real-world environments. In Proceedings of the 61st Annual Meeting of the Association for
|
| 265 |
+
|
| 266 |
+
Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 4928-4949. Association for Computational Linguistics.
|
| 267 |
+
Yu Gu and Yu Su. 2022. Arcaneqa: Dynamic program induction and contextualized encoding for knowledge base question answering. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 1718-1731. International Committee on Computational Linguistics.
|
| 268 |
+
Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, Edwin Lewis-Kelham, Gerard de Melo, and Gerhard Weikum. 2011. YAGO2: exploring and querying world knowledge in time, space, context, and many languages. In Proceedings of the 20th International Conference on World Wide Web, WWW 2011, Hyderabad, India, March 28 - April 1, 2011 (Companion Volume), pages 229-232. ACM.
|
| 269 |
+
Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In *Findings of the Association for Computational Linguistics: ACL* 2023, Toronto, Canada, July 9-14, 2023, pages 1049-1065. Association for Computational Linguistics.
|
| 270 |
+
Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jannik Strötgen, and Gerhard Weikum. 2018. TEQUILA: temporal question answering over knowledge bases. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, pages 1807-1810. ACM.
|
| 271 |
+
Paul A. Kirschner, John Sweller, and Richard E. Clark. 2006. Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41:75-86.
|
| 272 |
+
Brenden M Lake and Marco Baroni. 2023. Human-like systematic generalization through a meta-learning neural network. Nature, pages 1-7.
|
| 273 |
+
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
|
| 274 |
+
Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Shafiq R. Joty, Soujanya Poria, and Lidong Bing. 2023a. Chain-of-knowledge: Grounding large language models via dynamic knowledge adapting over heterogeneous sources.
|
| 275 |
+
Yunxin Li, Baotian Hu, Xinyu Chen, Lin Ma, and Min Zhang. 2023b. Lmeye: An interactive perception network for large language models. arXiv preprint arXiv:2305.03701.
|
| 276 |
+
|
| 277 |
+
Yunxin Li, Baotian Hu, Wei Wang, Xiaochun Cao, and Min Zhang. 2023c. Towards vision enhancing llms: Empowering multimodal knowledge storage and sharing in llms. arXiv preprint arXiv:2311.15759.
|
| 278 |
+
Ke Liang, Yue Liu, Sihang Zhou, Wenxuan Tu, Yi Wen, Xihong Yang, Xiangjun Dong, and Xinwang Liu. 2023a. Knowledge graph contrastive learning based on relation-symmetrical structure. IEEE Transactions on Knowledge and Data Engineering, pages 1-12.
|
| 279 |
+
Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, and Xinwang Liu. 2023b. Learn from relational correlations and periodic events for temporal knowledge graph reasoning. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1559-1568.
|
| 280 |
+
Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu, and Fuchun Sun. 2022. Reasoning over different types of knowledge graphs: Static, temporal and multi-modal. arXiv preprint arXiv:2212.05767.
|
| 281 |
+
Ruibo Liu, Guoqing Zheng, Shashank Gupta, Radhika Gaonkar, Chongyang Gao, Soroush Vosoughi, Milad Shokouhi, and Ahmed Hassan Awadallah. 2022. Knowledge infused decoding. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. Open-Review.net.
|
| 282 |
+
Junru Lu, Siyu An, Mingbao Lin, Gabriele Pergola, Yu-lan He, Di Yin, Xing Sun, and Yunsheng Wu. 2023. Memochat: Tuning llms to use memos for consistent long-range open-domain conversation. CoRR, abs/2308.08239.
|
| 283 |
+
Haoran Luo, Haihong E, Zichen Tang, Shiyao Peng, Yikai Guo, Wentai Zhang, Chenghao Ma, Guanting Dong, Meina Song, and Wei Lin. 2023. ChatKBQA: A generate-then-retrieve framework for knowledge base question answering with fine-tuned large language models. CoRR, abs/2310.08975.
|
| 284 |
+
J. MacQueen. 1967. Some methods for classification and analysis of multivariate observations.
|
| 285 |
+
Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. 2022. Memory-assisted prompt editing to improve GPT-3 after deployment. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 2833-2861. Association for Computational Linguistics.
|
| 286 |
+
Costas Mavromatis, Prasanna Lakkur Subramanyam, Vassilis N. Ioannidis, Soji Adeshina, Phillip R. Howard, Tetiana Grinberg, Nagib Hakim, and George Karypis. 2021. Tempoqr: Temporal question reasoning over knowledge graphs. CoRR, abs/2112.05785.
|
| 287 |
+
|
| 288 |
+
Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey. CoRR, abs/2302.07842.
|
| 289 |
+
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? ArXiv, abs/2202.12837.
|
| 290 |
+
Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. 2023. Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies. CoRR, abs/2308.03188.
|
| 291 |
+
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. CoRR, abs/2302.12813.
|
| 292 |
+
John F. Roddick and Myra Spiliopoulou. 2002. A survey of temporal knowledge discovery paradigms and methods. IEEE Trans. Knowl. Data Eng., 14(4):750-767.
|
| 293 |
+
John R. Savery and Thomas M. Duffy. 1995. Problem based learning: An instructional model and its constructivist framework. Educational Technology archive, 35:31-38.
|
| 294 |
+
Apoorv Saxena, Soumen Chakrabarti, and Partha P. Talukdar. 2021a. Question answering over temporal knowledge graphs. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6663-6676. Association for Computational Linguistics.
|
| 295 |
+
Apoorv Saxena, Soumen Chakrabarti, and Partha P. Talukdar. 2021b. Question answering over temporal knowledge graphs. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6663-6676. Association for Computational Linguistics.
|
| 296 |
+
Apoorv Saxena, Aditay Tripathi, and Partha P. Talukdar. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4498-4507. Association for Computational Linguistics.
|
| 297 |
+
|
| 298 |
+
Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with dynamic memory and self-reflection. CoRR, abs/2303.11366.
|
| 299 |
+
Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Heung-Yeung Shum, and Jian Guo. 2023. Think-on-graph: Deep and responsible reasoning of large language model with knowledge graph. CoRR, abs/2307.07697.
|
| 300 |
+
Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78-85.
|
| 301 |
+
Danqing Wang and Lei Li. 2023. Learning from mistakes via cooperative study assistant for large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 10667-10685. Association for Computational Linguistics.
|
| 302 |
+
Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, and Li Guo. 2023. Recursively summarizing enables long-term dialogue memory in large language models. CoRR, abs/2308.15022.
|
| 303 |
+
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.
|
| 304 |
+
Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, and Jun Liu. 2023. Symbol-llm: Towards foundational symbol-centric interface for large language models. CoRR, abs/2311.09278.
|
| 305 |
+
Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, and Xindong Wu. 2023a. Chatgpt is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling. CoRR, abs/2306.11489.
|
| 306 |
+
Zeyuan Yang, Peng Li, and Yang Liu. 2023b. Failures pave the way: Enhancing large language models through tuning-free rule accumulation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 1751-1777. Association for Computational Linguistics.
|
| 307 |
+
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
|
| 308 |
+
Junchi Yu, Ran He, and Rex Ying. 2023. Thought propagation: An analogical approach to complex reasoning with large language models. CoRR, abs/2310.03965.
|
| 309 |
+
|
| 310 |
+
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, and Kai Yu. 2023. Large language model is semi-parametric reinforcement learning agent. arXiv preprint arXiv:2306.07929.
|
| 311 |
+
|
| 312 |
+
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023a. A survey of large language models. CoRR, abs/2303.18223.
|
| 313 |
+
|
| 314 |
+
Ziwang Zhao, Linmei Hu, Hanyu Zhao, Yingxia Shao, and Yequan Wang. 2023b. Knowledgeable parameter efficient tuning network for commonsense question answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 9051-9063. Association for Computational Linguistics.
|
| 315 |
+
|
| 316 |
+
Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, and Yanlin Wang. 2023. Memorybank: Enhancing large language models with long-term memory. CoRR, abs/2305.10250.
|
| 317 |
+
|
| 318 |
+
# A Appendix
|
| 319 |
+
|
| 320 |
+
# A.1 More Details about Baseline Methods
|
| 321 |
+
|
| 322 |
+
In our evaluation, we compared several baseline methods.
|
| 323 |
+
|
| 324 |
+
- Pre-trained LMs: To evaluate BERT (Devlin et al., 2019) and ALBERT (Lan et al., 2020), we generate their LM-based question embedding and concatenate it with the entity and time embeddings, followed by a learnable projection. The resulting embedding is scored against all entities and timestamps via dot product (a minimal sketch of this scoring appears after this list).
|
| 325 |
+
- **EmbedKGQA** (Saxena et al., 2020) is designed for static KGs. To deal with multiple temporal granularities, timestamps are ignored during pre-training and random time embeddings are used.
|
| 326 |
+
- CronKGQA (Saxena et al., 2021a) is designed for a single temporal granularity. To deal with multiple granularities, time embeddings at the year/month granularity are drawn at random from the corresponding day embeddings.
|
| 327 |
+
- MultiQA (Chen et al., 2023) is designed for multi-granularity temporal questions, with a transformer-based time aggregation module.
|
| 328 |
+
- ChatGPT*. We use ChatGPT to provide direct answers to the questions.
|
| 329 |
+
|
| 330 |
+
<table><tr><td colspan="2"></td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td rowspan="3">Single</td><td>Equal</td><td>135,890</td><td>18,983</td><td>17,311</td></tr><tr><td>Before/After</td><td>75,340</td><td>11,655</td><td>11,073</td></tr><tr><td>First/Last</td><td>72,252</td><td>11,097</td><td>10,480</td></tr><tr><td rowspan="3">Multiple</td><td>Equal Multi</td><td>16,893</td><td>3,213</td><td>3,207</td></tr><tr><td>After First</td><td>43,305</td><td>6,499</td><td>6,266</td></tr><tr><td>Before Last</td><td>43,107</td><td>6,532</td><td>6,247</td></tr><tr><td colspan="2">Total</td><td>386,787</td><td>57,979</td><td>54,584</td></tr></table>
|
| 331 |
+
|
| 332 |
+
Table 3: Statistics of question categories in MULTITQ.
|
| 333 |
+
|
| 334 |
+
<table><tr><td colspan="2"></td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td rowspan="2">Simple</td><td>Simple Entity</td><td>90,651</td><td>7,745</td><td>7,812</td></tr><tr><td>Simple Time</td><td>61,471</td><td>5,197</td><td>5,046</td></tr><tr><td rowspan="3">Complex</td><td>Time Join</td><td>55,453</td><td>3,878</td><td>3,832</td></tr><tr><td>First/Last</td><td>118,556</td><td>11,198</td><td>11,159</td></tr><tr><td>Before/After</td><td>23,869</td><td>1,928</td><td>2,151</td></tr><tr><td colspan="2">Total</td><td>350,000</td><td>30,000</td><td>30,000</td></tr></table>
|
| 335 |
+
|
| 336 |
+
Table 4: Statistics of question categories in CRONQUESTIONS.
|
| 337 |
+
|
| 338 |
+
|
| 339 |
+
|
| 340 |
+
- KG-RAG. To validate the performance of the LLM in the presence of relevant background knowledge, we extract relevant quadruples (up to 20) from the TKG based on the entity and time information appearing in the question and include them in the prompt, so that ChatGPT answers in a retrieval-augmented manner for comparison.
|
| 341 |
+
- ReAct KB: To assess the applicability of ReAct (Yao et al., 2023) to our task, we designed a variant of ReAct by integrating our knowledge-agnostic module, which generates all feasible actions (using the same atomic action templates as ARI). The LLM is then prompted to select one action from the available options. Parameter settings remained consistent with our ARI approach, including the use of the same Named Entity Linking (NEL) method and reasoning length.
|
| 342 |
+
- CoT KB: We introduce the CoT KB method, which integrates a knowledge-based module into the Chain-of-Thought (CoT) (Wei et al., 2022) framework. This allows the LLM to interact with the KG under the guidance of examples, thereby obtaining the final answer more effectively. We manually construct 9 specific examples with detailed reasoning steps to guide the LLM, while keeping other settings unchanged.
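To make the scoring scheme of the pre-trained LM baselines above more concrete, the following is a minimal sketch assuming a PyTorch setting; the class name, dimensions, and the single linear projection are illustrative choices, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class LMQuestionScorer(nn.Module):
    """Hypothetical sketch of the pre-trained-LM baseline scoring.

    A question embedding produced by BERT/ALBERT is concatenated with entity
    and time embeddings, passed through a learnable projection, and the result
    is scored against all entities and timestamps via dot product.
    """
    def __init__(self, q_dim: int, kg_dim: int, n_entities: int, n_timestamps: int):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, kg_dim)
        self.time_emb = nn.Embedding(n_timestamps, kg_dim)
        self.project = nn.Linear(q_dim + 2 * kg_dim, kg_dim)

    def forward(self, q_emb, ent_ids, time_ids):
        x = torch.cat([q_emb, self.entity_emb(ent_ids), self.time_emb(time_ids)], dim=-1)
        x = self.project(x)                                    # (batch, kg_dim)
        candidates = torch.cat([self.entity_emb.weight,
                                self.time_emb.weight], dim=0)  # all entities + timestamps
        return x @ candidates.T                                # dot-product scores
```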
|
| 343 |
+
|
| 344 |
+
# A.2 Datasets Statistics
|
| 345 |
+
|
| 346 |
+
CRONQUESTIONS (Saxena et al., 2021b) is a dataset for temporal knowledge graph question answering. The entities and times present in the questions are annotated. CRONQUESTIONS has four question types, including both simple and complex temporal questions.
|
| 347 |
+
|
| 348 |
+
MULTITQ (Chen et al., 2023) is a complex temporal question answering dataset with multi-granularity temporal information. Compared to existing datasets, MULTITQ offers several advantages, including large scale, ample relations, and multiple temporal granularities, and hence better reflects real-world scenarios.
|
| 349 |
+
|
| 350 |
+
We summarize the number of questions across different question types for MULTITQ and CRONQUESTIONS in Table 3 and Table 4, respectively. In Table 5, we present sample questions from MULTITQ by question type, time granularity, and answer type.
|
| 351 |
+
|
| 352 |
+
<table><tr><td>Property</td><td>Sample Question</td></tr><tr><td colspan="2">By question type</td></tr><tr><td>Equal</td><td>Which country provided humanitarian aid to Sudan in 2007?</td></tr><tr><td>Before/After</td><td>Who commended the Military of Mali before the Armed Rebel of Mali did?</td></tr><tr><td>First/Last</td><td>When did the Militant of Taliban first commend the Government of Pakistan?</td></tr><tr><td>Equal Multi</td><td>In 2012, who last did Barack Obama appeal for?</td></tr><tr><td>Before Last</td><td>Who was threatened by Benjamin Netanyahu last before Middle East?</td></tr><tr><td>After First</td><td>Who first wanted to negotiate with Evo Morales after the Citizen of Brazil did?</td></tr><tr><td colspan="2">By time granularity</td></tr><tr><td>Year</td><td>Who first made Abu Sayyaf suffer from conventional military forces In 2015?</td></tr><tr><td>Month</td><td>In Dec, 2008, who would wish to negotiate with the Senate of Romania?</td></tr><tr><td>Day</td><td>In Jul 21st, 2011, who criticized the Media of Ecuador?</td></tr><tr><td colspan="2">By answer type</td></tr><tr><td>Entity</td><td>Which country visited Japan in 2013?</td></tr><tr><td>Time</td><td>When did China express intent to meet with the Government of Pakistan?</td></tr></table>
|
| 353 |
+
|
| 354 |
+
Table 5: Representative examples from MULTITQ.
|
| 355 |
+
|
| 356 |
+
# A.3 Action Templates
|
| 357 |
+
|
| 358 |
+
Our approach is designed to be both generalizable and scalable. The templates in ARI are not heavily hand-crafted, high-level functions, but rather fine-grained atomic operations (e.g., time extraction, entity extraction). These atomic operations can be flexibly combined to cover a wide range of complex actions, demonstrating their
|
| 359 |
+
|
| 360 |
+
versatility and extensibility.
|
| 361 |
+
|
| 362 |
+
Our action templates in ARI strictly follow the definition of functions in Table 6. We employ several specialized functions to facilitate precise information retrieval. The getTime function retrieves the timing of specific events based on the given entities and relation. For temporal positioning, getBefore, getAfter, and getBetween identify entities or events relative to specified time frames. In terms of entity queries, getTailEntity and getHeadEntity ascertain linked entities based on a given relation, with an optional time constraint. For queries targeting specific time instances, getFirst and getLast pinpoint entities with the earliest and latest occurrences, respectively. Responses are then articulated using the answer function, providing a streamlined method for answering queries within the TKG.
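As a concrete illustration of these atomic operations, the snippet below sketches one possible implementation over a TKG stored as a list of quadruples. The function names mirror Table 6, but the data layout, signatures, and timestamp handling (ISO-formatted strings compared lexicographically) are simplifying assumptions rather than the authors' actual implementation.

```python
from collections import namedtuple

# A TKG fact (head, relation, tail, time); ISO date strings such as
# "2014-10-07" compare lexicographically in chronological order.
Fact = namedtuple("Fact", ["head", "rel", "tail", "time"])

def get_time(kg, head, rel, tail):
    """Retrieve the time of events matching the given head, relation and tail."""
    return [(f.head, f.time) for f in kg
            if f.head == head and f.rel == rel and f.tail == tail]

def get_head_entity(kg, tail, rel, time=None):
    """Identify head entities linked to `tail` via `rel`, optionally at `time`."""
    return [(f.head, f.time) for f in kg
            if f.tail == tail and f.rel == rel
            and (time is None or f.time.startswith(time))]

def get_before(pairs, time):
    """Keep (entity, time) pairs that occurred before the given time."""
    return [p for p in pairs if p[1] < time]

def get_first(pairs):
    """Pinpoint the pair with the earliest occurrence."""
    return min(pairs, key=lambda p: p[1])

def get_last(pairs):
    """Pinpoint the pair with the latest occurrence."""
    return max(pairs, key=lambda p: p[1])
```

Under these assumptions, Question 3 in Table 8 corresponds to the composition get_time → get_head_entity → get_before → get_last → answer.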
|
| 363 |
+
|
| 364 |
+
Our method exhibits low coupling with templates, making it adaptable to new data and domains. The extensibility of atomic templates is straightforward, allowing for easy incorporation of additional templates as needed. For instance, if we were to extend our approach to handle spatiotemporal data questions, adding a spatial atomic operation would be a straightforward task without the need for significant modifications.
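For instance, the spatial extension mentioned above could be a single additional atomic operation over the same (entity, time) pairs; the function and the entity-to-region mapping below are purely illustrative assumptions, not part of ARI.

```python
def get_in_region(pairs, region, entity_region):
    """Hypothetical spatial template: keep pairs whose entity lies in `region`.

    `entity_region` is an assumed mapping from entity names to region labels;
    nothing else in the action set needs to change.
    """
    return [p for p in pairs if entity_region.get(p[0]) == region]
```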
|
| 365 |
+
|
| 366 |
+
# A.4 Details about the Instruction Format
|
| 367 |
+
|
| 368 |
+
In Figure 5, we illustrate an example of reasoning using the ARI model. Table 8 shows some exemplars of ARI. During each step of the process, the LLM receives guidance from abstract methods and selects the optimal action from available paths, continuing until it deems an answer has been sufficiently formulated or the maximum reasoning length is reached. Figure 8 presents the complete set of instructions used in our experiments, comprising components such as task definition, functional interpretations of potential actions, the current temporal question under consideration, historical reasoning steps, available candidate actions for the current round, feedback from the previous round's action, and requirements for output formatting.
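The step-by-step interaction described above can be summarized in a short control loop. This is a sketch only: the callables (`llm`, `candidate_actions_fn`, `execute_fn`), the prompt-template handling, and the `max_steps` budget are assumptions, while the `$...$` action-extraction convention and the prompt slots follow Figure 8.

```python
import re

def ari_reasoning_loop(llm, prompt_template, question, methodology, examples,
                       candidate_actions_fn, execute_fn, max_steps=6):
    """Let the LLM pick one atomic action per step until it answers or times out."""
    history = []
    for _ in range(max_steps):
        # prompt_template is assumed to contain exactly the slots of Figure 8
        prompt = prompt_template.format(
            examples=examples,
            question=question,
            methodology=methodology,
            history="\n".join(history),
            actions="\n".join(candidate_actions_fn(history)),
        )
        reply = llm(prompt)
        # the instruction asks the model to enclose its chosen action in $...$
        match = re.search(r"\$(.+?)\$", reply, flags=re.S)
        if match is None:
            return None                       # non-compliant output (cf. Section A.6)
        action = match.group(1).strip()
        if action.startswith("answer("):
            return action                     # final answer reached
        feedback = execute_fn(action)         # run the action against the TKG
        history.append(f"Action: {action}\nResponse: {feedback}")
    return None                               # reasoning budget exhausted
```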
|
| 369 |
+
|
| 370 |
+
# A.5 Impact of Cluster Quantity
|
| 371 |
+
|
| 372 |
+
In Figure 7, we show the reduced-dimensional clustering diagram for the 10 categories of questions in the experiment. To verify the effect of different numbers of clusters on the results, we examine how the number of historical reasoning process clusters affects performance, as shown in Figure 6. We
|
| 373 |
+
|
| 374 |
+

|
| 375 |
+
Figure 5: A demonstration sample of ARI reasoning on MULTITQ.
|
| 376 |
+
|
| 377 |
+
<table><tr><td>Action template</td><td>Comments</td></tr><tr><td>getTailEntity(head, rel, time)</td><td>Identify the tail/object entity based on the head/subject entity and relation</td></tr><tr><td>getHeadEntity(tail, rel, time)</td><td>Identify the head/subject entity based on the tail/object entity and relation</td></tr><tr><td>getTime(head, rel, tail)</td><td>Retrieve the time of a specific event based on the head entity, relation and tail entity</td></tr><tr><td>getBetween(entities, Time1, Time2)</td><td>Identify entities/events that occurred between two specific times</td></tr><tr><td>getBefore(entities, time)</td><td>Identify entities/events that occurred before a given time</td></tr><tr><td>getAfter(entities, time)</td><td>Identify entities/events that occurred after a given time</td></tr><tr><td>getFirst(entities)</td><td>Pinpoint entities with the earliest occurrence</td></tr><tr><td>getLast(entities)</td><td>Pinpoint entities with the latest occurrence</td></tr><tr><td>answer(entities/time)</td><td>To provide your answer, use the answer function</td></tr></table>
|
| 378 |
+
|
| 379 |
+
Table 6: Action templates in ARI. We employ these specialized functions to facilitate precise information retrieval.
|
| 380 |
+
|
| 381 |
+

|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
Figure 6: Accuracy vs. number of clusters for ARI on MULTITQ.
|
| 385 |
+
|
| 386 |
+

|
| 387 |
+
Figure 7: Clustering results for historical inference questions.
|
| 388 |
+
|
| 389 |
+
observe an initial increase followed by a decline in performance for both simple and complex problems. This pattern can be attributed to the fact that when the number of clusters is too low, the LLM is unable to distill concise and effective abstract methods from the noisy and abundant historical paths. Conversely, when the number of clusters is too high relative to a fixed number of historical
|
| 390 |
+
|
| 391 |
+
samples, each category contains too few samples to provide the LLM with sufficient information to refine abstract methods. Thus, we observe a trend of improvement that eventually reverses as the number of clusters increases.
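A minimal sketch of this clustering step follows, assuming the historical reasoning paths have already been embedded as fixed-length vectors. k-means is used here because MacQueen (1967) is cited in the references; the embedding source, the swept values of k, and the random seed are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_histories(path_embeddings: np.ndarray, n_clusters: int) -> dict:
    """Group historical reasoning paths into clusters of similar question types."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(path_embeddings)
    # each cluster later yields correct/incorrect exemplars that the LLM
    # distills into an abstract methodology (prompt in Figure 9)
    return {c: np.where(labels == c)[0].tolist() for c in range(n_clusters)}

# Sweeping the cluster count reproduces the trade-off discussed above: too few
# clusters mix unrelated question types, too many leave each cluster with too
# few samples to distill a reliable methodology.
dummy_embeddings = np.random.rand(200, 64)   # stand-in for real path embeddings
for k in (2, 5, 10, 20):
    buckets = cluster_histories(dummy_embeddings, k)
```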
|
| 392 |
+
|
| 393 |
+
# A.6 Error Analysis
|
| 394 |
+
|
| 395 |
+
For error analysis, we randomly sample 100 error instances from the test set and summarize the following three types of typical errors: (1) Retrieving irrelevant entities (in MULTITQ), meaning the model obtained wrong entities from the KG. Although our entity linking model achieves high prediction accuracy, wrong entities still appear in some questions. (2) Low-quality abstract methodological guidance. Within the dataset, there exist complex problems for which the historical reasoning processes consistently led to incorrect conclusions. This lack of sufficient correct reasoning histories hampers the LLM's ability to synthesize and refine effective abstract methodologies. Consequently, the low-quality abstract methods derived by the LLM prove inadequate in guiding subsequent reasoning processes, leading to a cascade of errors. (3) Uncertain outputs of LLMs. Despite the constraint that LLMs can only select from candidate actions or provide final answers, there are instances where they do not strictly adhere to the given instruction. This non-compliance leads to the failure of our predefined graph query methods, consequently impeding effective reasoning.
|
| 396 |
+
|
| 397 |
+
This demonstrates that more efforts are needed to strengthen the model's reasoning capabilities, particularly in enhancing the reasoning capabilities of LLMs and diversifying their reasoning processes. It is crucial to provide a richer array of effective historical information for the generation of abstract methods. This approach is vital to prevent the LLMs from falling into a repetitive cycle of errors.
|
| 398 |
+
|
| 399 |
+
# A.7 Generalizability of Method on Other LLMs
|
| 400 |
+
|
| 401 |
+
<table><tr><td>Model</td><td>LLM only</td><td>ARI</td></tr><tr><td>Llama-2 Chat 7B</td><td>0.040</td><td>0.105</td></tr><tr><td>GPT-4</td><td>0.125</td><td>0.411</td></tr></table>
|
| 402 |
+
|
| 403 |
+
Table 7: Accuracy of ARI with Other LLMs on MULTITQ
|
| 404 |
+
|
| 405 |
+
To assess the effectiveness of ARI across various LLMs, we conducted experiments using the open-source model Llama-2-7B-chat and GPT-4. Our findings indicate that models with greater inherent capabilities yield better direct inference outcomes, presumably due to the acquisition of more extensive knowledge during training. Furthermore, the performance enhancements
|
| 406 |
+
|
| 407 |
+
in ARI models built upon these base models are more pronounced. Notably, the ARI model based on GPT-4 achieved a score of 0.411 on the MultiTQ benchmark, representing a relative improvement of $40.2\%$ over the current state-of-the-art models. This significant advancement underscores our model's robust generalization capabilities.
|
| 408 |
+
|
| 409 |
+
Our work focuses on enhancing LLM reasoning with abstract guidance, minimizing irrelevant noise by isolating the LLM from direct knowledge sources. Unlike mainstream LLM agents designed for interactive environments, we employ a knowledge-agnostic module for environmental interactions, allowing the LLM to make decisions based on abstract guidance alone. Additionally, this study is open to incorporating interactive decision-making elements from recent LLM advancements, aiming to enhance the method's effectiveness and broaden its applicability in future research.
|
| 410 |
+
|
| 411 |
+
Question 1: In which month did the City Mayor of Philippines first praise Ona?
|
| 412 |
+
```txt
|
| 413 |
+
Action 0 getTime(City_Mayor_(Philippines),Praise_or_endorse,Ona)
|
| 414 |
+
Response 0 entities = [('City_Mayor_(Philippines)', '2014-10-07')]
|
| 415 |
+
Action 1 getFirst({entities})
|
| 416 |
+
Response 1 entities = [('City_Mayor_(Philippines)', '2014-10-07')]
|
| 417 |
+
Action 2 answer(2014-10)
|
| 418 |
+
Response 2 Correct!
|
| 419 |
+
```
|
| 420 |
+
|
| 421 |
+
Question 2: Before Mali's militant, which country was the last to criticise France?
|
| 422 |
+
```txt
|
| 423 |
+
Action 0 getTime(Militant_(Mali),Criticize_or_denounce,France)
|
| 424 |
+
Response 0 entities = [('Militant_(Mali)', '2013-01-21')]
|
| 425 |
+
Action 1 getFirst({entities})
|
| 426 |
+
Response 1 entities = ["Government_(Cote_d'Ivoire)", ...]
|
| 427 |
+
Action 2 getLast({entities})
|
| 428 |
+
Response 2 entities = [('Royal_Administration_(Spain)', '2015-12-24')]
|
| 429 |
+
Action 3 answer(Royal_Administration_(Spain))
|
| 430 |
+
Response 3 Correct!
|
| 431 |
+
```
|
| 432 |
+
|
| 433 |
+
Question 3: Who was the last to give a criticism to Iran before Pervez Musharraf did?
|
| 434 |
+
```txt
|
| 435 |
+
Action 0 getTime(Pervez_Musharraf,Criticize_or_denounce,Iran)
|
| 436 |
+
Response 0 entities = [('Pervez_Musharraf', '2006-01-17')]
|
| 437 |
+
Action 1 getHeadEntity(Iran,Criticize_or_denounce,no time)
|
| 438 |
+
Response 1 entities = [('Shirin_Ebadi', '2005-01-05'), ...]
|
| 439 |
+
Action 2 getBefore({entities},2006-01-17)
|
| 440 |
+
Response 2 entities = [('Shirin_Ebadi', '2005-01-05'), ...]
|
| 441 |
+
Action 3 getLast({entities})
|
| 442 |
+
Response 3 entities = [('Angela_Merkel', '2006-01-16')]
|
| 443 |
+
Action 4 answer(Angela_Merkel)
|
| 444 |
+
Response 4 Correct!
|
| 445 |
+
```
|
| 446 |
+
|
| 447 |
+
Question 4: At what time did Xi Jinping first make optimistic remarks on Japan?
|
| 448 |
+
```txt
|
| 449 |
+
Action 0 getTime(Xi_Jinping,Make_optimistic_comment,Japan)
|
| 450 |
+
Response 0 entities = [('Xi_Jinping', '2008-04-18'), ...]
|
| 451 |
+
Action 1 getFirst({entities})
|
| 452 |
+
Response 1 entities = [('Xi_Jinping', '2008-04-18')]
|
| 453 |
+
Action 2 answer(2008-04-18)
|
| 454 |
+
Response 2 Correct!
|
| 455 |
+
```
|
| 456 |
+
|
| 457 |
+
Question 5: Who wanted to cooperate with Japan in November, 2005?
|
| 458 |
+
```txt
|
| 459 |
+
Action 0 getHeadEntity(Japan,Express_intent_to_cooperate,2005-11)
|
| 460 |
+
Response 0 entities = ['Government_Official_(Russia)', ...]
|
| 461 |
+
Action 1 answer(South_Korea)
|
| 462 |
+
Response 1 Correct!
|
| 463 |
+
```
|
| 464 |
+
|
| 465 |
+
Table 8: Exemplars of ARI for MULTITQ
|
| 466 |
+
|
| 467 |
+
<table><tr><td>f'' Please use the tool provided below to interact with the knowledge graph. You will find a list of actions categorized into time-based queries, entity queries, and specific time queries. There may be more than one answer to the question, but you only need to answer one correct answer that satisfies the question.</td></tr><tr><td>To solve this question, you need to first identify the entities and relationships in the question, selecting the appropriate actions to retrieve the required information, and finally, providing the correct answer.</td></tr><tr><td>Time-based Queries:</td></tr><tr><td>Retrieve the time of a specific event based on the head/subject entity, relation and tail/object entity by using the $get_time(HEAD, RELATION, TAIL) $ function.</td></tr><tr><td>Identify entities/events that occurred before a given time by using the $get_before(ENTITY_LIST, SPECIFIED_TIME) $ function.</td></tr><tr><td>Identify entities/events that occurred after a given time by using the $get_after(ENTITY_LIST, SPECIFIED_TIME) $ function.</td></tr><tr><td>Identify entities/events that occurred between two specific times by using the $get_between(ENTITY_LIST, START_TIME, END_TIME) $ function.</td></tr><tr><td>Entity Queries:</td></tr><tr><td>Identify the tail/object entity based on the head/subject entity and relation by using the $getTAILEntity(CURRENT_HEAD, RELATION, OPTIONAL_TIMECONSTRAINT) $ function.</td></tr><tr><td>Identify the head/sujct entity based on the tail/object entity and relation by using the $get_head-entity(CURRENTTAIL, RELATION, OPTIONAL_TIMECONSTRAINT) $ function.</td></tr><tr><td>Specific Time Queries:</td></tr><tr><td>Pinpoint entities with the earliest occurrence by using the $get_first(ENTITY_LIST) $ function.</td></tr><tr><td>Identify entities with the latest occurrence by using the $get_last(ENTITY_LIST) $ function.</td></tr><tr><td>To provide your answer, use the $answer(YOUR ANSWER) $ function.</td></tr><tr><td>Note: Always enclose the selected action in $ and provide a reason for your choice if necessary.</td></tr><tr><td>Examples for your reference: {examples} (end of examples)</td></tr><tr><td>Current Challenge:</td></tr><tr><td>Question: {question}</td></tr><tr><td>Methodology: {methodology} (end of methodology)</td></tr><tr><td>Previous Actions: {history} (end of previous actions)</td></tr><tr><td>Available Actions: {actions}</td></tr><tr><td>Choose your next action from the available actions above, ensuring its completeness. If you have found the answer, remember to use the answer function.</td></tr><tr><td>Organize your output by strictly following the format below:</td></tr><tr><td>Action:</td></tr><tr><td><Choose your next action from the available actions above. Note: Always enclose the selected action in $. Replace {your specified time} with a specified time in the format YYYY or YYYY-MM or YYYY-MM-DD></td></tr><tr><td>Reason:</td></tr><tr><td><Explain the reason for choosing this action.>'''</td></tr></table>
|
| 468 |
+
|
| 469 |
+
Figure 8: Prompt for the action selection.
|
| 470 |
+
|
| 471 |
+
f'' Carefully analyze the following correct and incorrect examples. From these, extract and summarize the corresponding patterns and principles. Based on these examples, provide a comprehensive methodology that describes how to correctly tackle this type of problem, highlighting the key steps and common pitfalls to avoid.
|
| 472 |
+
|
| 473 |
+
Task Definition: <Task Definition> (end of Task Definition)
|
| 474 |
+
|
| 475 |
+
Here is an example output:
|
| 476 |
+
|
| 477 |
+
Example 1:
|
| 478 |
+
|
| 479 |
+
Overall methodology Instruction:
|
| 480 |
+
|
| 481 |
+
This type of problem involves the sequential determination of events, e.g., who {Relation R} {entity C} before {entity B}. To find the answer {entity A}, we need to reason in three steps: first, determine the specific temporal anchor, i.e., the occurrence time $t$ of {entity B, Relation R, entity C}; then find out which head entities have been associated with {entity C} through Relation R; and finally filter out the answers that satisfy the time requirement before $t$. The specific steps are as follows.
|
| 482 |
+
|
| 483 |
+
Step-by-step Guide:
|
| 484 |
+
|
| 485 |
+
1. Firstly, use get_time to find the time, $get_time(entity B, Relation R, entity C)$,
|
| 486 |
+
to get the quadruple {entity B, Relation R, entity C, Time t};
|
| 487 |
+
2. Use the get_head-entity method to get the head entity, $get_head-entity(entity C, Relation R)$, to get a list of quadruples;
|
| 488 |
+
3. Use the get_before method to filter the entities that satisfy the constraints, $get_before({entities}, t)$, to obtain a list of entities that satisfy the conditions;
|
| 489 |
+
4. Complete the reasoning process by providing the found answer with $answer(entity A)$.
|
| 490 |
+
|
| 491 |
+
(end of example output)
|
| 492 |
+
|
| 493 |
+
Here are the correct and incorrect samples for the current question type: Correct samples: {correct_examples}
|
| 494 |
+
|
| 495 |
+
Incorrect samples: {incorrect_examples} (end of samples)
|
| 496 |
+
|
| 497 |
+
Now start writing. Please design a methodology that describes how to correctly tackle this type of problem. The goal is to provide a comprehensive guide that highlights the key steps and common pitfalls to avoid when approaching this type of problem. Organize your output by strictly following the output format as below:
|
| 498 |
+
|
| 499 |
+
Overall Instruction:
|
| 500 |
+
|
| 501 |
+
<Define this methodology in detail. Provide a concise guide or inference. Note that the guidance you provide should be at a methodological level, for this type of question, not for a specific one.>
|
| 502 |
+
|
| 503 |
+
Step-by-step Guide:
|
| 504 |
+
<A step-by-step guide or procedure detailing how to approach and solve this kind of question. Note that the steps proposed should be specific and relevant to this type of question, tell which type of action should use in each step and the reason>
|
| 505 |
+
|
| 506 |
+
Figure 9: Prompt for the abstract methodology instruction generation.
|
2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b3ef9b687f67d62091bcb3f4ba176a54495b6d15113169707ae9f448281690aa
|
| 3 |
+
size 1103670
|
2024/Temporal Knowledge Question Answering via Abstract Reasoning Induction/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Text Embedding Inversion Security for Multilingual Language Models/d5a1911b-6437-4a76-99a3-4662bc3b1fe0_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Text Embedding Inversion Security for Multilingual Language Models/d5a1911b-6437-4a76-99a3-4662bc3b1fe0_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Text Embedding Inversion Security for Multilingual Language Models/d5a1911b-6437-4a76-99a3-4662bc3b1fe0_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d38662de567c5b4854f8f9de0cc9d17d28bc6615dbba4d467c9fd4eae714d451
|
| 3 |
+
size 1362567
|
2024/Text Embedding Inversion Security for Multilingual Language Models/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Text Embedding Inversion Security for Multilingual Language Models/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:90f8d3d0b973429961628fe32e4a1fd758bcf6ce82b2f512535ee3b16a857a5d
|
| 3 |
+
size 1850109
|
2024/Text Embedding Inversion Security for Multilingual Language Models/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Text-like Encoding of Collaborative Information in Large Language Models for Recommendation/7a9b4ddb-f1a8-4364-9a24-e043e4b29411_content_list.json
ADDED
|
@@ -0,0 +1,1739 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Text-like Encoding of Collaborative Information in Large Language Models for Recommendation",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
147,
|
| 8 |
+
84,
|
| 9 |
+
850,
|
| 10 |
+
124
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Yang Zhang $^{1}$ , Keqin Bao $^{1}$ , Ming Yan $^{1}$ , Wenjie Wang $^{2}$ , Fuli Feng $^{1*}$ , Xiangnan He $^{1*}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
268,
|
| 19 |
+
135,
|
| 20 |
+
732,
|
| 21 |
+
168
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "<sup>1</sup>University of Science and Technology of China, <sup>2</sup>National University of Singapore",
|
| 28 |
+
"bbox": [
|
| 29 |
+
299,
|
| 30 |
+
170,
|
| 31 |
+
695,
|
| 32 |
+
204
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "zyang1580@gmail.com, {baokq, ym689} $@$ mail.ustc.edu.cn, {wenjiewang96,fulifeng93,xiangnanhe} $@$ gmail.com",
|
| 39 |
+
"bbox": [
|
| 40 |
+
310,
|
| 41 |
+
206,
|
| 42 |
+
682,
|
| 43 |
+
236
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Abstract",
|
| 50 |
+
"text_level": 1,
|
| 51 |
+
"bbox": [
|
| 52 |
+
260,
|
| 53 |
+
260,
|
| 54 |
+
339,
|
| 55 |
+
275
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "When adapting Large Language Models for Recommendation (LLMRec), it is crucial to integrate collaborative information. Existing methods achieve this by learning collaborative embeddings in LLMs' latent space from scratch or by mapping from external models. However, they fail to represent the information in a text-like format, which may not align optimally with LLMs. To bridge this gap, we introduce BinLLM, a novel LLMRec method that seamlessly integrates collaborative information through text-like encoding. BinLLM converts collaborative embeddings from external models into binary sequences — a specific text format that LLMs can understand and operate on directly, facilitating the direct usage of collaborative information in text-like format by LLMs. Additionally, BinLLM provides options to compress the binary sequence using dot-decimal notation to avoid excessively long lengths. Extensive experiments validate that BinLLM introduces collaborative information in a manner better aligned with LLMs, resulting in enhanced performance. We release our code at https://github.com/zyang1580/BinLLM.",
|
| 62 |
+
"bbox": [
|
| 63 |
+
144,
|
| 64 |
+
285,
|
| 65 |
+
460,
|
| 66 |
+
640
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "1 Introduction",
|
| 73 |
+
"text_level": 1,
|
| 74 |
+
"bbox": [
|
| 75 |
+
114,
|
| 76 |
+
650,
|
| 77 |
+
258,
|
| 78 |
+
665
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "Due to the remarkable power of large language models (LLMs), there is a growing focus on adapting them for recommender systems (LLMRec), which has seen significant progress in the past year (Bao et al., 2023b,a,c; Harte et al., 2023; Rajput et al., 2023; Wei et al., 2024). In recommendation, collaborative information, which delineates the co-occurrence patterns among user-item interactions, has emerged as a pivotal component in modeling user interests, especially for active users and items (Zhang et al., 2023b). However, this information exists in a different modality from textual data and thus presents a challenge in directly leveraged by LLMs like textual information (Zhang",
|
| 85 |
+
"bbox": [
|
| 86 |
+
112,
|
| 87 |
+
675,
|
| 88 |
+
489,
|
| 89 |
+
901
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "et al., 2023b; Li et al., 2023b; Bao et al., 2023a). To enhance recommendation quality, it is undoubtedly crucial to seamlessly integrate collaborative information into LLMs.",
|
| 96 |
+
"bbox": [
|
| 97 |
+
507,
|
| 98 |
+
261,
|
| 99 |
+
884,
|
| 100 |
+
324
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "To date, two integration strategies have emerged. The first strategy resembles latent factor models (Koren et al., 2009) by incorporating additional tokens and corresponding embeddings into LLMs to represent users and items, subsequently fitting interaction data to implicitly capture collaborative information within the embeddings (Zheng et al., 2024a; Hua et al., 2023). However, this approach suffers from low learning efficacy due to the inherent low-rank nature of the information, leading to tokenization redundancy within LLMs (Deletang et al., 2024; Zhang et al., 2023b). To address these challenges, an alternative approach leverages an external latent factor model to capture the information, which is then mapped into the LLM token embedding space (Zhang et al., 2023b; Li et al., 2023c; Liao et al., 2024), circumventing the need to learn it from scratch. While effective, this method introduces the additional overhead of training the mapping model.",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
325,
|
| 110 |
+
884,
|
| 111 |
+
646
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "Whether learning collaborative information directly from scratch in the LLM token embedding space or mapping it from external models, the resulting representations diverge significantly from the LLM's original textual-level encoding. This, to a certain extent, hampers the full utilization of LLMs' capabilities, as LLMs are initially trained on textual data and excel at processing textually encoded information. For instance, introducing new tokens alters the generative space of LLMs, potentially compromising their original functionalities, let alone capitalizing on their capabilities. Therefore, exploring text-like encoding of collaborative information in LLMs holds immense promise. Nevertheless, it poses challenges due to the inherent differences between textual and collaborative information modalities (Zhang et al., 2023b).",
|
| 118 |
+
"bbox": [
|
| 119 |
+
507,
|
| 120 |
+
648,
|
| 121 |
+
885,
|
| 122 |
+
921
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "page_footnote",
|
| 128 |
+
"text": "*Corresponding authors.",
|
| 129 |
+
"bbox": [
|
| 130 |
+
139,
|
| 131 |
+
906,
|
| 132 |
+
292,
|
| 133 |
+
920
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "page_number",
|
| 139 |
+
"text": "9181",
|
| 140 |
+
"bbox": [
|
| 141 |
+
478,
|
| 142 |
+
927,
|
| 143 |
+
517,
|
| 144 |
+
940
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "footer",
|
| 150 |
+
"text": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9181-9191 August 11-16, 2024 ©2024 Association for Computational Linguistics",
|
| 151 |
+
"bbox": [
|
| 152 |
+
89,
|
| 153 |
+
945,
|
| 154 |
+
905,
|
| 155 |
+
971
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"text": "In this study, we delve into the central theme of encoding collaborative information in LLMs for recommendation, an area of promise yet not explored in LLMRec. The crux lies in transforming collaborative information into a sequence formatted like text. We believe this text-like sequence need not be comprehensible to humans; rather, it should be interpretable by LLMs for effective utilization, such as facilitating reasoning tasks like discerning user and item similarities through sequence comparisons. Thus, this text sequence does not necessarily have to adhere to conventional natural language patterns.",
|
| 162 |
+
"bbox": [
|
| 163 |
+
112,
|
| 164 |
+
84,
|
| 165 |
+
492,
|
| 166 |
+
294
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 1
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"text": "To this end, we introduce BinLLM, an innovative LLMRec approach that integrates collaborative information into LLMs using a text-like encoding strategy. We transform the collaborative embeddings obtained from external models into binary sequences, treating them as textual features directly usable by LLMs. This design is motivated by two primary considerations: 1) the feasibility of binarizing collaborative embeddings without compromising performance (Tan et al., 2020); 2) LLMs can naturally perform bitwise operations or do so after instruction tuning (Savelka et al., 2023), enabling the comparison of similarities between binarized sequences. Taking a step further, we explore representing the binary sequence in dot-decimal notation (Abusafat et al., 2021), resulting in shorter representations, akin to converting binary sequences to IPv4 addresses. By fine-tuning LLMs with recommendation instruction data containing such encoded collaborative information, we could leverage both textual semantics and collaborative data for recommendation without modifying the LLMs.",
|
| 173 |
+
"bbox": [
|
| 174 |
+
115,
|
| 175 |
+
294,
|
| 176 |
+
490,
|
| 177 |
+
646
|
| 178 |
+
],
|
| 179 |
+
"page_idx": 1
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"text": "The main contributions of this work are summarized as follows:",
|
| 184 |
+
"bbox": [
|
| 185 |
+
112,
|
| 186 |
+
648,
|
| 187 |
+
489,
|
| 188 |
+
678
|
| 189 |
+
],
|
| 190 |
+
"page_idx": 1
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "list",
|
| 194 |
+
"sub_type": "text",
|
| 195 |
+
"list_items": [
|
| 196 |
+
"- We emphasize the significance of text-like encoding for collaborative information in LLMRec to enhance alignment with LLMs.",
|
| 197 |
+
"- We introduce BinLLM, a novel method that efficiently encodes collaborative information textually for LLMs by converting collaborative embeddings into binary sequences.",
|
| 198 |
+
"- We perform comprehensive experiments on two datasets, showcasing the effectiveness of our approach through extensive results."
|
| 199 |
+
],
|
| 200 |
+
"bbox": [
|
| 201 |
+
112,
|
| 202 |
+
687,
|
| 203 |
+
487,
|
| 204 |
+
852
|
| 205 |
+
],
|
| 206 |
+
"page_idx": 1
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"type": "text",
|
| 210 |
+
"text": "2 Methodology",
|
| 211 |
+
"text_level": 1,
|
| 212 |
+
"bbox": [
|
| 213 |
+
112,
|
| 214 |
+
864,
|
| 215 |
+
265,
|
| 216 |
+
881
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 1
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"type": "text",
|
| 222 |
+
"text": "In this section, we introduce our BinLLM method, starting with presenting the model architecture and",
|
| 223 |
+
"bbox": [
|
| 224 |
+
112,
|
| 225 |
+
889,
|
| 226 |
+
489,
|
| 227 |
+
921
|
| 228 |
+
],
|
| 229 |
+
"page_idx": 1
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"type": "text",
|
| 233 |
+
"text": "Table 1: Example of the used prompt template, using the same format as CoLLM.",
|
| 234 |
+
"bbox": [
|
| 235 |
+
507,
|
| 236 |
+
82,
|
| 237 |
+
882,
|
| 238 |
+
109
|
| 239 |
+
],
|
| 240 |
+
"page_idx": 1
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"type": "text",
|
| 244 |
+
"text": "#Question: A user has given high ratings to the following books: <ItemTitleList>. Additionally, we have information about the user's preferences encoded in the feature <UserID>. Using all available information, make a prediction about whether the user would enjoy the book titled <TargetItemTitle> with the feature <TargetItemID?> Answer with \"Yes\" or \"No\". \\n#Answer:",
|
| 245 |
+
"bbox": [
|
| 246 |
+
515,
|
| 247 |
+
123,
|
| 248 |
+
885,
|
| 249 |
+
252
|
| 250 |
+
],
|
| 251 |
+
"page_idx": 1
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"type": "text",
|
| 255 |
+
"text": "followed by a description of the tuning method.",
|
| 256 |
+
"bbox": [
|
| 257 |
+
507,
|
| 258 |
+
279,
|
| 259 |
+
863,
|
| 260 |
+
294
|
| 261 |
+
],
|
| 262 |
+
"page_idx": 1
|
| 263 |
+
},
|
| 264 |
+
{
|
| 265 |
+
"type": "text",
|
| 266 |
+
"text": "2.1 Model Architecture",
|
| 267 |
+
"text_level": 1,
|
| 268 |
+
"bbox": [
|
| 269 |
+
507,
|
| 270 |
+
307,
|
| 271 |
+
710,
|
| 272 |
+
321
|
| 273 |
+
],
|
| 274 |
+
"page_idx": 1
|
| 275 |
+
},
|
| 276 |
+
{
|
| 277 |
+
"type": "text",
|
| 278 |
+
"text": "Figure 1 depicts the model architecture of BinLLM, comprising two main components: prompt generation and LLM prediction. Similar to previous approaches, we convert recommendation data into prompts and then input them directly into LLMs for prediction. However, the key distinction of Bin-LLM is that it represents collaborative information in a text-like format by converting collaborative embeddings into binary sequences. We next delve into the specifics of these two components.",
|
| 279 |
+
"bbox": [
|
| 280 |
+
507,
|
| 281 |
+
328,
|
| 282 |
+
885,
|
| 283 |
+
489
|
| 284 |
+
],
|
| 285 |
+
"page_idx": 1
|
| 286 |
+
},
|
| 287 |
+
{
|
| 288 |
+
"type": "text",
|
| 289 |
+
"text": "2.1.1 Prompt Construction",
|
| 290 |
+
"text_level": 1,
|
| 291 |
+
"bbox": [
|
| 292 |
+
507,
|
| 293 |
+
499,
|
| 294 |
+
737,
|
| 295 |
+
514
|
| 296 |
+
],
|
| 297 |
+
"page_idx": 1
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"type": "text",
|
| 301 |
+
"text": "As depicted in Figure 1, we construct prompts using a template featuring empty fields, encompassing both textual fields (e.g., \"<ItemTitleList>\") and ID fields (e.g., \"<UserID>\") See the template example in Table 1. By populating these fields with corresponding users' data, we can generate personalized prompts for recommendation purposes. The textual fields are utilized to incorporate textual information, which can be directly filled with corresponding textual data from the recommendation dataset, such as historical item titles in the \"<ItemTitleList>\" fields. The ID fields are designated for embedding collaborative information, which is acquired through a Text-like Encoding (TE) module. Next, we delve into the encoding process of collaborative information.",
|
| 302 |
+
"bbox": [
|
| 303 |
+
507,
|
| 304 |
+
519,
|
| 305 |
+
885,
|
| 306 |
+
774
|
| 307 |
+
],
|
| 308 |
+
"page_idx": 1
|
| 309 |
+
},
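To make the prompt-construction step above concrete, here is a minimal Python sketch that fills the Table 1 template; the helper name `build_prompt` and the example user/item codes are hypothetical illustrations, not the paper's released code.

```python
# Sketch of prompt construction (field names follow the Table 1 template).
TEMPLATE = (
    "#Question: A user has given high ratings to the following books: <ItemTitleList>. "
    "Additionally, we have information about the user's preferences encoded in the feature <UserID>. "
    "Using all available information, make a prediction about whether the user would enjoy the book "
    "titled <TargetItemTitle> with the feature <TargetItemID>? Answer with \"Yes\" or \"No\". \n#Answer:"
)

def build_prompt(item_titles, user_code, target_title, target_item_code):
    """Fill textual fields with raw dataset text and ID fields with the
    text-like (binary / dot-decimal) codes produced by the TE module."""
    return (TEMPLATE
            .replace("<ItemTitleList>", ", ".join(item_titles))
            .replace("<UserID>", user_code)
            .replace("<TargetItemTitle>", target_title)
            .replace("<TargetItemID>", target_item_code))

# Hypothetical example values:
prompt = build_prompt(
    item_titles=["Dune", "Foundation"],
    user_code="172.16.254.1",        # text-like encoding of the user embedding
    target_title="Hyperion",
    target_item_code="10.0.37.202",  # text-like encoding of the item embedding
)
print(prompt)
```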
|
| 310 |
+
{
|
| 311 |
+
"type": "text",
|
| 312 |
+
"text": "Text-like Encoding of Collaborative Information. To better integrate with LLMs, we aim to encode collaborative information in a text-like format. To accomplish this, we convert collaborative information into a binary sequence, enabling LLMs to perform bitwise operations for reasoning. The encoding model involves two components: 1) Collaborative Model, a conventional latent factor module capable of encoding collaborative information",
|
| 313 |
+
"bbox": [
|
| 314 |
+
507,
|
| 315 |
+
777,
|
| 316 |
+
885,
|
| 317 |
+
921
|
| 318 |
+
],
|
| 319 |
+
"page_idx": 1
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"type": "page_number",
|
| 323 |
+
"text": "9182",
|
| 324 |
+
"bbox": [
|
| 325 |
+
480,
|
| 326 |
+
927,
|
| 327 |
+
521,
|
| 328 |
+
940
|
| 329 |
+
],
|
| 330 |
+
"page_idx": 1
|
| 331 |
+
},
|
| 332 |
+
{
|
| 333 |
+
"type": "image",
|
| 334 |
+
"img_path": "images/1b8331a11e57e9fa22b2779bc1b7501052356bb47e420354bb5adce35f28a04d.jpg",
|
| 335 |
+
"image_caption": [
|
| 336 |
+
"Figure 1: Model architecture overview of our BinLLM. The purple line is used to fill the text fields in the prompt template, introducing textual information like item titles, while the red line is used to fill the ID fields in the prompt template, introducing collaborative information."
|
| 337 |
+
],
|
| 338 |
+
"image_footnote": [],
|
| 339 |
+
"bbox": [
|
| 340 |
+
141,
|
| 341 |
+
84,
|
| 342 |
+
858,
|
| 343 |
+
318
|
| 344 |
+
],
|
| 345 |
+
"page_idx": 2
|
| 346 |
+
},
|
| 347 |
+
{
|
| 348 |
+
"type": "text",
|
| 349 |
+
"text": "as numerical latent vectors (i.e., collaborative embeddings). 2) Binarization & Compression Module, utilized to transform collaborative embeddings into binary sequences or further compressed formats.",
|
| 350 |
+
"bbox": [
|
| 351 |
+
112,
|
| 352 |
+
397,
|
| 353 |
+
487,
|
| 354 |
+
461
|
| 355 |
+
],
|
| 356 |
+
"page_idx": 2
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"type": "text",
|
| 360 |
+
"text": "- Collaborative model. Given a user $u$ and an item $i$ , the collaborative model generates corresponding embeddings for them, denoted as $e_{u}$ and $e_{i}$ , respectively. Formally,",
|
| 361 |
+
"bbox": [
|
| 362 |
+
112,
|
| 363 |
+
462,
|
| 364 |
+
489,
|
| 365 |
+
526
|
| 366 |
+
],
|
| 367 |
+
"page_idx": 2
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"type": "equation",
|
| 371 |
+
"text": "\n$$\n\\boldsymbol {e} _ {u} = f _ {c} (u; \\theta) \\tag {1}\n$$\n",
|
| 372 |
+
"text_format": "latex",
|
| 373 |
+
"bbox": [
|
| 374 |
+
245,
|
| 375 |
+
533,
|
| 376 |
+
487,
|
| 377 |
+
558
|
| 378 |
+
],
|
| 379 |
+
"page_idx": 2
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "equation",
|
| 383 |
+
"text": "\n$$\n\\pmb {e} _ {i} = f _ {c} (i; \\theta),\n$$\n",
|
| 384 |
+
"text_format": "latex",
|
| 385 |
+
"bbox": [
|
| 386 |
+
248,
|
| 387 |
+
556,
|
| 388 |
+
352,
|
| 389 |
+
571
|
| 390 |
+
],
|
| 391 |
+
"page_idx": 2
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"type": "text",
|
| 395 |
+
"text": "where $f_{c}$ represents the collaborative model parameterized by $\\theta$ . Here, $\\pmb{e}_u \\in \\mathcal{R}^d$ and $\\pmb{e}_i \\in \\mathcal{R}^d$ are $d$ -dimensional embeddings that encode collaborative information for the user and item, respectively.",
|
| 396 |
+
"bbox": [
|
| 397 |
+
112,
|
| 398 |
+
579,
|
| 399 |
+
489,
|
| 400 |
+
643
|
| 401 |
+
],
|
| 402 |
+
"page_idx": 2
|
| 403 |
+
},
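A minimal sketch of the collaborative model $f_c$ in Equation (1), assuming a PyTorch MF-style embedding table; the class name and sizes are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MFCollab(nn.Module):
    """Minimal latent-factor model f_c(.; theta) from Equation (1):
    a user/item ID is mapped to a d-dimensional collaborative embedding."""
    def __init__(self, n_users: int, n_items: int, d: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d)
        self.item_emb = nn.Embedding(n_items, d)

    def forward(self, user_ids, item_ids):
        return self.user_emb(user_ids), self.item_emb(item_ids)

# Example: embeddings for one (user, item) pair.
model = MFCollab(n_users=1000, n_items=5000, d=32)
e_u, e_i = model(torch.tensor([3]), torch.tensor([42]))
print(e_u.shape, e_i.shape)  # torch.Size([1, 32]) torch.Size([1, 32])
```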
|
| 404 |
+
{
|
| 405 |
+
"type": "text",
|
| 406 |
+
"text": "- Binarization & compression. After obtaining the collaborative embeddings, this component is used to convert them into binary sequences, with the option to compress the sequences.",
|
| 407 |
+
"bbox": [
|
| 408 |
+
112,
|
| 409 |
+
644,
|
| 410 |
+
487,
|
| 411 |
+
707
|
| 412 |
+
],
|
| 413 |
+
"page_idx": 2
|
| 414 |
+
},
|
| 415 |
+
{
|
| 416 |
+
"type": "text",
|
| 417 |
+
"text": "Binarization. To binarize the collaborative embeddings, we generally follow the mechanism proposed by Tan et al. (2020). Firstly, we transform the collaborative embeddings into a suitable space using a fully connected layer and then apply the sign function to obtain the binary results. Formally, for collaborative embeddings $\\mathbf{e}_u$ and $\\mathbf{e}_i$ of user $u$ and item $i$ , they are converted into binary sequences as follows:",
|
| 418 |
+
"bbox": [
|
| 419 |
+
112,
|
| 420 |
+
708,
|
| 421 |
+
489,
|
| 422 |
+
851
|
| 423 |
+
],
|
| 424 |
+
"page_idx": 2
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"type": "equation",
|
| 428 |
+
"text": "\n$$\n\\boldsymbol {h} _ {u} = \\operatorname {s i g n} \\left(\\sigma \\left(W e _ {u} + b\\right)\\right) \\tag {2}\n$$\n",
|
| 429 |
+
"text_format": "latex",
|
| 430 |
+
"bbox": [
|
| 431 |
+
200,
|
| 432 |
+
857,
|
| 433 |
+
487,
|
| 434 |
+
882
|
| 435 |
+
],
|
| 436 |
+
"page_idx": 2
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"type": "equation",
|
| 440 |
+
"text": "\n$$\n\\boldsymbol {h} _ {i} = \\operatorname {s i g n} \\left(\\sigma \\left(W \\boldsymbol {e} _ {i} + b\\right)\\right),\n$$\n",
|
| 441 |
+
"text_format": "latex",
|
| 442 |
+
"bbox": [
|
| 443 |
+
201,
|
| 444 |
+
878,
|
| 445 |
+
394,
|
| 446 |
+
894
|
| 447 |
+
],
|
| 448 |
+
"page_idx": 2
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"type": "text",
|
| 452 |
+
"text": "where $\\pmb{h}_u \\in \\{0,1\\}^d$ and $\\pmb{h}_i \\in \\{0,1\\}^d$ denote the",
|
| 453 |
+
"bbox": [
|
| 454 |
+
112,
|
| 455 |
+
904,
|
| 456 |
+
485,
|
| 457 |
+
921
|
| 458 |
+
],
|
| 459 |
+
"page_idx": 2
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"type": "text",
|
| 463 |
+
"text": "obtained binary representation of collaborative information for the user and item, respectively. Here, $W \\in \\mathcal{R}^{d \\times d}$ and $b \\in \\mathcal{R}^d$ are the weights and bias for the fully connected layer, $\\sigma(\\cdot)$ represents the tanh activation function, and $\\mathrm{sign}(\\cdot)$ denotes the sign function. For a numerical value $x$ , we have:",
|
| 464 |
+
"bbox": [
|
| 465 |
+
507,
|
| 466 |
+
397,
|
| 467 |
+
884,
|
| 468 |
+
494
|
| 469 |
+
],
|
| 470 |
+
"page_idx": 2
|
| 471 |
+
},
|
| 472 |
+
{
|
| 473 |
+
"type": "equation",
|
| 474 |
+
"text": "\n$$\n\\operatorname {s i g n} (x) = \\left\\{ \\begin{array}{l l} 1, & \\text {i f} x > 0 \\\\ 0, & \\text {e l s e} \\end{array} \\right. \\tag {3}\n$$\n",
|
| 475 |
+
"text_format": "latex",
|
| 476 |
+
"bbox": [
|
| 477 |
+
579,
|
| 478 |
+
501,
|
| 479 |
+
882,
|
| 480 |
+
544
|
| 481 |
+
],
|
| 482 |
+
"page_idx": 2
|
| 483 |
+
},
|
| 484 |
+
{
|
| 485 |
+
"type": "text",
|
| 486 |
+
"text": "Through this method, we convert the numerical collaborative embeddings into binary sequences (e.g., '010110....'). These sequences can be directly inputted into LLMs and utilized for operations such as computing logical 'AND', thereby aiding in user preference reasoning.",
|
| 487 |
+
"bbox": [
|
| 488 |
+
507,
|
| 489 |
+
551,
|
| 490 |
+
882,
|
| 491 |
+
646
|
| 492 |
+
],
|
| 493 |
+
"page_idx": 2
|
| 494 |
+
},
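A small sketch of the binarization in Equations (2)-(3), plus the kind of bitwise comparison the resulting 0/1 sequences allow; it assumes PyTorch, and the helper names are illustrative rather than the authors' code.

```python
import torch
import torch.nn as nn

d = 32
fc = nn.Linear(d, d)  # the W, b of Equation (2)

def binarize(e):
    """Equations (2)/(3): tanh projection followed by sign, mapped to {0, 1}."""
    return (torch.tanh(fc(e)) > 0).int()

def and_overlap(h_u, h_i):
    """Bitwise 'AND' overlap between two 0/1 sequences, i.e. the kind of
    comparison an LLM can perform on the text-like codes."""
    return int((h_u & h_i).sum())

e_u, e_i = torch.randn(d), torch.randn(d)
h_u, h_i = binarize(e_u), binarize(e_i)
print("".join(map(str, h_u.tolist())))          # e.g. '010110...'
print("shared active bits:", and_overlap(h_u, h_i))
```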
|
| 495 |
+
{
|
| 496 |
+
"type": "text",
|
| 497 |
+
"text": "Compression. A limitation of binary sequences is their relatively long length, which poses a challenge for LLMs not proficient in handling lengthy sequences. Moreover, long sequences can constrain the inference efficiency of LLMRec. We thus consider compressing the binary sequences while keeping them leverageable by LLMs. Given that IPv4 (Peterson and Davie, 2007) is originally encoded from binary sequences and the Web includes sufficient knowledge about IPv4, the LLMs trained on the Web data could potentially understand the dot-decimal notation used by IPv4. Therefore, we consider compressing the binary embeddings in dot-decimal notations (Abusafat et al., 2021). We convert every eight binary digits into a decimal number, ranging from 0 to 255, and use the full stop (dot) as a separation character. Here is an",
|
| 498 |
+
"bbox": [
|
| 499 |
+
507,
|
| 500 |
+
648,
|
| 501 |
+
882,
|
| 502 |
+
921
|
| 503 |
+
],
|
| 504 |
+
"page_idx": 2
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"type": "page_number",
|
| 508 |
+
"text": "9183",
|
| 509 |
+
"bbox": [
|
| 510 |
+
480,
|
| 511 |
+
927,
|
| 512 |
+
519,
|
| 513 |
+
940
|
| 514 |
+
],
|
| 515 |
+
"page_idx": 2
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"type": "text",
|
| 519 |
+
"text": "example of compressing a 32-bit binary sequence:",
|
| 520 |
+
"bbox": [
|
| 521 |
+
112,
|
| 522 |
+
84,
|
| 523 |
+
487,
|
| 524 |
+
99
|
| 525 |
+
],
|
| 526 |
+
"page_idx": 3
|
| 527 |
+
},
|
| 528 |
+
{
|
| 529 |
+
"type": "equation",
|
| 530 |
+
"text": "\n$$\n\\underbrace {1 0 1 0 1 1 0 0} _ {1 7 2 .} \\underbrace {0 0 0 1 0 0 0 0} _ {1 6 .} \\underbrace {1 1 1 1 1 1 0} _ {2 5 4 .} \\underbrace {0 0 0 0 0 0 0 1} _ {1}. \\tag {4}\n$$\n",
|
| 531 |
+
"text_format": "latex",
|
| 532 |
+
"bbox": [
|
| 533 |
+
132,
|
| 534 |
+
115,
|
| 535 |
+
487,
|
| 536 |
+
148
|
| 537 |
+
],
|
| 538 |
+
"page_idx": 3
|
| 539 |
+
},
|
| 540 |
+
{
|
| 541 |
+
"type": "text",
|
| 542 |
+
"text": "Here, \"172.16.254.1\" is the compressed result, which significantly reduces the representation length. Notably, the compression is optional, and its usage depends on the length of the original binary sequence.",
|
| 543 |
+
"bbox": [
|
| 544 |
+
112,
|
| 545 |
+
159,
|
| 546 |
+
489,
|
| 547 |
+
240
|
| 548 |
+
],
|
| 549 |
+
"page_idx": 3
|
| 550 |
+
},
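A short sketch of the dot-decimal compression shown in Equation (4) (and its inverse, to show the encoding is lossless); plain Python with no external dependencies, and the function names are illustrative.

```python
def to_dot_decimal(bits: str) -> str:
    """Compress a binary sequence into dot-decimal notation (Equation (4)):
    every 8 bits become a 0-255 number, separated by dots, as in IPv4."""
    assert len(bits) % 8 == 0
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

def from_dot_decimal(code: str, width: int = 8) -> str:
    """Inverse mapping, included only to show no information is lost."""
    return "".join(format(int(part), f"0{width}b") for part in code.split("."))

bits = "10101100" "00010000" "11111110" "00000001"
print(to_dot_decimal(bits))                       # 172.16.254.1
print(from_dot_decimal("172.16.254.1") == bits)   # True
```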
|
| 551 |
+
{
|
| 552 |
+
"type": "text",
|
| 553 |
+
"text": "2.1.2 LLM Prediction",
|
| 554 |
+
"text_level": 1,
|
| 555 |
+
"bbox": [
|
| 556 |
+
112,
|
| 557 |
+
252,
|
| 558 |
+
302,
|
| 559 |
+
266
|
| 560 |
+
],
|
| 561 |
+
"page_idx": 3
|
| 562 |
+
},
|
| 563 |
+
{
|
| 564 |
+
"type": "text",
|
| 565 |
+
"text": "Once the empty fields in the prompt template are filled, the resulting prompt is fed into the LLMs for prediction. Similar to prior research (Bao et al., 2023b; Zhang et al., 2023b), given the absence of specific recommendation pre-training in LLMs, we introduce an additional LoRA module (Hu et al., 2022) for recommendation prediction. Formally, for a generated prompt $p$ , the prediction can be formulated as:",
|
| 566 |
+
"bbox": [
|
| 567 |
+
112,
|
| 568 |
+
272,
|
| 569 |
+
489,
|
| 570 |
+
416
|
| 571 |
+
],
|
| 572 |
+
"page_idx": 3
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"type": "equation",
|
| 576 |
+
"text": "\n$$\n\\hat {y} = L L M _ {\\hat {\\Phi} + \\Phi^ {\\prime}} (p), \\tag {5}\n$$\n",
|
| 577 |
+
"text_format": "latex",
|
| 578 |
+
"bbox": [
|
| 579 |
+
225,
|
| 580 |
+
432,
|
| 581 |
+
487,
|
| 582 |
+
451
|
| 583 |
+
],
|
| 584 |
+
"page_idx": 3
|
| 585 |
+
},
|
| 586 |
+
{
|
| 587 |
+
"type": "text",
|
| 588 |
+
"text": "where $\\hat{\\Phi}$ represents the pre-trained LLM's parameters, $\\Phi^{\\prime}$ denotes the LoRA model parameters, and $\\hat{y}$ represents the prediction results, which could be the predicted next item or the predicted likelihood of liking a candidate item, depending on the task.",
|
| 589 |
+
"bbox": [
|
| 590 |
+
112,
|
| 591 |
+
467,
|
| 592 |
+
489,
|
| 593 |
+
548
|
| 594 |
+
],
|
| 595 |
+
"page_idx": 3
|
| 596 |
+
},
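One plausible way to realize Equation (5) with an off-the-shelf stack, assuming the Hugging Face `transformers` and `peft` libraries; the checkpoint name and the Yes/No scoring heuristic are illustrative assumptions, not necessarily the authors' exact setup.

```python
# Sketch of Equation (5): a frozen LLM plus a LoRA adapter scores the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "lmsys/vicuna-7b-v1.5"   # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
llm = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
llm = get_peft_model(llm, lora)       # only the LoRA parameters Phi' are trainable

@torch.no_grad()
def predict_like(prompt: str) -> float:
    """Return a likelihood-style score by comparing next-token logits
    for 'Yes' vs 'No' at the '#Answer:' position."""
    ids = tok(prompt, return_tensors="pt")
    logits = llm(**ids).logits[0, -1]
    yes = tok("Yes", add_special_tokens=False).input_ids[0]
    no = tok("No", add_special_tokens=False).input_ids[0]
    return torch.softmax(logits[[yes, no]], dim=-1)[0].item()
```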
|
| 597 |
+
{
|
| 598 |
+
"type": "text",
|
| 599 |
+
"text": "2.2 Training",
|
| 600 |
+
"text_level": 1,
|
| 601 |
+
"bbox": [
|
| 602 |
+
112,
|
| 603 |
+
561,
|
| 604 |
+
230,
|
| 605 |
+
577
|
| 606 |
+
],
|
| 607 |
+
"page_idx": 3
|
| 608 |
+
},
|
| 609 |
+
{
|
| 610 |
+
"type": "text",
|
| 611 |
+
"text": "In our model architecture, two modules require training: the text-like encoding module and the LoRA module. The tuning for the text-like encoding module focuses on learning to generate the binary sequence for collaborative information, independent of the LLMs. The tuning for LoRA aims to instruct the LLM in making recommendations by leveraging collaborative information. We now present the two tuning paradigms, respectively.",
|
| 612 |
+
"bbox": [
|
| 613 |
+
112,
|
| 614 |
+
583,
|
| 615 |
+
489,
|
| 616 |
+
730
|
| 617 |
+
],
|
| 618 |
+
"page_idx": 3
|
| 619 |
+
},
|
| 620 |
+
{
|
| 621 |
+
"type": "text",
|
| 622 |
+
"text": "2.2.1 Pre-training for Text-like Encoding",
|
| 623 |
+
"text_level": 1,
|
| 624 |
+
"bbox": [
|
| 625 |
+
112,
|
| 626 |
+
741,
|
| 627 |
+
452,
|
| 628 |
+
757
|
| 629 |
+
],
|
| 630 |
+
"page_idx": 3
|
| 631 |
+
},
|
| 632 |
+
{
|
| 633 |
+
"type": "text",
|
| 634 |
+
"text": "To train the text-like encoding module, we directly utilize the binarized representation from Equation (2) to fit the training data. Formally, let $\\mathcal{D}$ denote the training data, and $(u,i,t)\\in \\mathcal{D}$ denote an interaction between user $u$ and item $i$ with label $t$ . We train the module by minimizing the following optimization problem:",
|
| 635 |
+
"bbox": [
|
| 636 |
+
112,
|
| 637 |
+
760,
|
| 638 |
+
489,
|
| 639 |
+
873
|
| 640 |
+
],
|
| 641 |
+
"page_idx": 3
|
| 642 |
+
},
|
| 643 |
+
{
|
| 644 |
+
"type": "equation",
|
| 645 |
+
"text": "\n$$\n\\underset {\\theta , W, b} {\\text {m i n i m i z e}} \\sum_ {(u, i, t) \\in \\mathcal {D}} \\ell \\left(t, \\boldsymbol {h} _ {u} ^ {\\top} \\boldsymbol {h} _ {i}\\right), \\tag {6}\n$$\n",
|
| 646 |
+
"text_format": "latex",
|
| 647 |
+
"bbox": [
|
| 648 |
+
181,
|
| 649 |
+
888,
|
| 650 |
+
487,
|
| 651 |
+
923
|
| 652 |
+
],
|
| 653 |
+
"page_idx": 3
|
| 654 |
+
},
|
| 655 |
+
{
|
| 656 |
+
"type": "text",
|
| 657 |
+
"text": "where $\\{\\theta, W, b\\}$ denote the model parameters in our text-like encoding module as discussed in Section 2.1.1, $h_u$ and $h_i$ denote the binary representations obtained from Equation (2), $h_u^\\top h_i$ represents the predicted likelihood of user $u$ liking item $i$ , and $\\ell(\\cdot)$ denotes the common recommendation loss, in this work, the binary cross-entropy loss.",
|
| 658 |
+
"bbox": [
|
| 659 |
+
507,
|
| 660 |
+
84,
|
| 661 |
+
884,
|
| 662 |
+
197
|
| 663 |
+
],
|
| 664 |
+
"page_idx": 3
|
| 665 |
+
},
|
| 666 |
+
{
|
| 667 |
+
"type": "text",
|
| 668 |
+
"text": "Notably, the sign function lacks smoothness, and its gradient is ill-defined as zero, posing an apparent challenge for back-propagation. To enable training the model in an end-to-end fashion, we approximate the gradient using the straight-through estimator (STE), following the approach outlined by Tan et al. (2020). That is, we directly use the gradients of the output as the gradients of the input for the sign function.",
|
| 669 |
+
"bbox": [
|
| 670 |
+
507,
|
| 671 |
+
198,
|
| 672 |
+
884,
|
| 673 |
+
342
|
| 674 |
+
],
|
| 675 |
+
"page_idx": 3
|
| 676 |
+
},
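A compact sketch of the pre-training objective in Equation (6) combined with the straight-through estimator described above, assuming PyTorch; the mapping to ±1 follows the paper's footnote, and all module sizes, names, and the single-step loop are placeholders.

```python
import torch
import torch.nn as nn

class SignSTE(torch.autograd.Function):
    """sign(.) with a straight-through estimator: forward maps to {-1, +1}
    (per the paper's footnote for training), backward passes the gradient
    through unchanged."""
    @staticmethod
    def forward(ctx, x):
        return torch.where(x > 0, torch.ones_like(x), -torch.ones_like(x))
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

d = 32
fc = nn.Linear(d, d)                              # W, b of Equation (2)
user_emb, item_emb = nn.Embedding(100, d), nn.Embedding(500, d)
opt = torch.optim.Adam(list(fc.parameters()) + list(user_emb.parameters())
                       + list(item_emb.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def step(u, i, t):
    """One optimization step of Equation (6) with binary cross-entropy."""
    h_u = SignSTE.apply(torch.tanh(fc(user_emb(u))))
    h_i = SignSTE.apply(torch.tanh(fc(item_emb(i))))
    loss = bce((h_u * h_i).sum(-1), t)            # h_u^T h_i as the score
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(step(torch.tensor([3]), torch.tensor([7]), torch.tensor([1.0])))
```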
|
| 677 |
+
{
|
| 678 |
+
"type": "text",
|
| 679 |
+
"text": "2.2.2 LoRA Tuning",
|
| 680 |
+
"text_level": 1,
|
| 681 |
+
"bbox": [
|
| 682 |
+
507,
|
| 683 |
+
354,
|
| 684 |
+
680,
|
| 685 |
+
370
|
| 686 |
+
],
|
| 687 |
+
"page_idx": 3
|
| 688 |
+
},
|
| 689 |
+
{
|
| 690 |
+
"type": "text",
|
| 691 |
+
"text": "To tune the LoRA module, we consider two tuning methods: intuitive tuning and two-step tuning.",
|
| 692 |
+
"bbox": [
|
| 693 |
+
507,
|
| 694 |
+
375,
|
| 695 |
+
880,
|
| 696 |
+
407
|
| 697 |
+
],
|
| 698 |
+
"page_idx": 3
|
| 699 |
+
},
|
| 700 |
+
{
|
| 701 |
+
"type": "text",
|
| 702 |
+
"text": "Intuitive tuning: This method directly tunes the LoRA module from scratch with the prompts that contain the collaborative information.",
|
| 703 |
+
"bbox": [
|
| 704 |
+
507,
|
| 705 |
+
409,
|
| 706 |
+
880,
|
| 707 |
+
455
|
| 708 |
+
],
|
| 709 |
+
"page_idx": 3
|
| 710 |
+
},
|
| 711 |
+
{
|
| 712 |
+
"type": "text",
|
| 713 |
+
"text": "Two-step tuning: In intuitive tuning, a potential challenge arises in scenarios like rating prediction tasks, where binary representations can serve as highly effective features with relatively low learning complexity<sup>2</sup>. Incorporating collaborative information from scratch might cause the model to overly depend on these features, potentially neglecting other attributes akin to learning shortcut features. To address this, we propose an additional two-step tuning strategy. Initially, we train the model using a prompt that excludes collaborative information. Subsequently, we refine the model further by fine-tuning it using the complete prompt that contains the collaborative information.",
|
| 714 |
+
"bbox": [
|
| 715 |
+
507,
|
| 716 |
+
458,
|
| 717 |
+
882,
|
| 718 |
+
682
|
| 719 |
+
],
|
| 720 |
+
"page_idx": 3
|
| 721 |
+
},
|
| 722 |
+
{
|
| 723 |
+
"type": "text",
|
| 724 |
+
"text": "3 Experiments",
|
| 725 |
+
"text_level": 1,
|
| 726 |
+
"bbox": [
|
| 727 |
+
507,
|
| 728 |
+
697,
|
| 729 |
+
655,
|
| 730 |
+
714
|
| 731 |
+
],
|
| 732 |
+
"page_idx": 3
|
| 733 |
+
},
|
| 734 |
+
{
|
| 735 |
+
"type": "text",
|
| 736 |
+
"text": "In this section, we conduct experiments to answer the following research questions:",
|
| 737 |
+
"bbox": [
|
| 738 |
+
507,
|
| 739 |
+
726,
|
| 740 |
+
880,
|
| 741 |
+
757
|
| 742 |
+
],
|
| 743 |
+
"page_idx": 3
|
| 744 |
+
},
|
| 745 |
+
{
|
| 746 |
+
"type": "text",
|
| 747 |
+
"text": "RQ1: Does BinLLM effectively incorporate collaborative information into LLMs to improve recommendation performance? How does its performance compare with that of existing methods?",
|
| 748 |
+
"bbox": [
|
| 749 |
+
507,
|
| 750 |
+
758,
|
| 751 |
+
884,
|
| 752 |
+
822
|
| 753 |
+
],
|
| 754 |
+
"page_idx": 3
|
| 755 |
+
},
|
| 756 |
+
{
|
| 757 |
+
"type": "page_footnote",
|
| 758 |
+
"text": "${}^{1}$ Notably, during training, we will convert the binary values of 0 to -1 for ${\\mathbf{h}}_{u}$ and ${\\mathbf{h}}_{i}$ following the approach used in prior work (Tan et al., 2020).",
|
| 759 |
+
"bbox": [
|
| 760 |
+
507,
|
| 761 |
+
834,
|
| 762 |
+
882,
|
| 763 |
+
870
|
| 764 |
+
],
|
| 765 |
+
"page_idx": 3
|
| 766 |
+
},
|
| 767 |
+
{
|
| 768 |
+
"type": "page_footnote",
|
| 769 |
+
"text": "2Because the model could achieve satisfactory results by solely performing bitwise \"AND\" operations on the collaborative representations of the given user and candidate item, referencing the learning process of binary representation.",
|
| 770 |
+
"bbox": [
|
| 771 |
+
507,
|
| 772 |
+
871,
|
| 773 |
+
882,
|
| 774 |
+
920
|
| 775 |
+
],
|
| 776 |
+
"page_idx": 3
|
| 777 |
+
},
|
| 778 |
+
{
|
| 779 |
+
"type": "page_number",
|
| 780 |
+
"text": "9184",
|
| 781 |
+
"bbox": [
|
| 782 |
+
480,
|
| 783 |
+
928,
|
| 784 |
+
521,
|
| 785 |
+
940
|
| 786 |
+
],
|
| 787 |
+
"page_idx": 3
|
| 788 |
+
},
|
| 789 |
+
{
|
| 790 |
+
"type": "table",
|
| 791 |
+
"img_path": "images/d7edcb329621ed845325087457b31415c666de0945a9dbf3e4981839bafba831.jpg",
|
| 792 |
+
"table_caption": [
|
| 793 |
+
"Table 2: Statistics of the processed datasets."
|
| 794 |
+
],
|
| 795 |
+
"table_footnote": [],
|
| 796 |
+
"table_body": "<table><tr><td>Dataset</td><td>#Train</td><td>#Valid</td><td>#Test</td><td>#User</td><td>#Item</td></tr><tr><td>ML-1M</td><td>33,891</td><td>10,401</td><td>7,331</td><td>839</td><td>3,256</td></tr><tr><td>Amazon-Book</td><td>727,468</td><td>25,747</td><td>25,747</td><td>22,967</td><td>34,154</td></tr></table>",
|
| 797 |
+
"bbox": [
|
| 798 |
+
119,
|
| 799 |
+
105,
|
| 800 |
+
489,
|
| 801 |
+
143
|
| 802 |
+
],
|
| 803 |
+
"page_idx": 4
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "text",
|
| 807 |
+
"text": "RQ2: How do our design choices influence the performance of the proposed method BinLLM?",
|
| 808 |
+
"bbox": [
|
| 809 |
+
112,
|
| 810 |
+
168,
|
| 811 |
+
487,
|
| 812 |
+
202
|
| 813 |
+
],
|
| 814 |
+
"page_idx": 4
|
| 815 |
+
},
|
| 816 |
+
{
|
| 817 |
+
"type": "text",
|
| 818 |
+
"text": "3.1 Experimental Settings",
|
| 819 |
+
"text_level": 1,
|
| 820 |
+
"bbox": [
|
| 821 |
+
112,
|
| 822 |
+
214,
|
| 823 |
+
334,
|
| 824 |
+
230
|
| 825 |
+
],
|
| 826 |
+
"page_idx": 4
|
| 827 |
+
},
|
| 828 |
+
{
|
| 829 |
+
"type": "text",
|
| 830 |
+
"text": "Recommendation Task. Given that this is an initial exploration of text-like encoding for collaborative information, our experiments primarily concentrate on the click/rating prediction task, with other recommendation tasks being ignored. Specifically, we aim to predict whether a user $u$ (comprising other profile information such as historical interactions) would click on/like a given candidate item $i$ . The task aligns with that of CoLLM, which investigates the utilization of collaborative information for recommendation through embedding mapping in latent space. Hence, our experimental setup generally follows that of CoLLM.",
|
| 831 |
+
"bbox": [
|
| 832 |
+
112,
|
| 833 |
+
236,
|
| 834 |
+
490,
|
| 835 |
+
445
|
| 836 |
+
],
|
| 837 |
+
"page_idx": 4
|
| 838 |
+
},
|
| 839 |
+
{
|
| 840 |
+
"type": "text",
|
| 841 |
+
"text": "Datasets. We conduct experiments on two representative datasets:",
|
| 842 |
+
"bbox": [
|
| 843 |
+
112,
|
| 844 |
+
457,
|
| 845 |
+
489,
|
| 846 |
+
487
|
| 847 |
+
],
|
| 848 |
+
"page_idx": 4
|
| 849 |
+
},
|
| 850 |
+
{
|
| 851 |
+
"type": "list",
|
| 852 |
+
"sub_type": "text",
|
| 853 |
+
"list_items": [
|
| 854 |
+
"- ML-1M (Harper and Konstan, 2016): This refers to a widely recognized movie recommendation benchmark dataset, MovieLens-1M $^3$ , provided by GroupLens research. The dataset comprises user ratings for movies and includes textual information for users and items, such as movie titles.",
|
| 855 |
+
"- Amazon-Book (Ni et al., 2019): This pertains to the \"Books\" subset within the renowned Amazon Product Review dataset<sup>4</sup>. This dataset aggregates user reviews of books from Amazon, encompassing both the review score and review comments. Additionally, it includes textual information about the items."
|
| 856 |
+
],
|
| 857 |
+
"bbox": [
|
| 858 |
+
114,
|
| 859 |
+
501,
|
| 860 |
+
489,
|
| 861 |
+
712
|
| 862 |
+
],
|
| 863 |
+
"page_idx": 4
|
| 864 |
+
},
|
| 865 |
+
{
|
| 866 |
+
"type": "text",
|
| 867 |
+
"text": "For dataset processing, we adhere entirely to the setup of CoLLM, encompassing label processing and data selection/splitting methods. The statistics of the processed datasets are presented in Table 2.",
|
| 868 |
+
"bbox": [
|
| 869 |
+
112,
|
| 870 |
+
727,
|
| 871 |
+
487,
|
| 872 |
+
791
|
| 873 |
+
],
|
| 874 |
+
"page_idx": 4
|
| 875 |
+
},
|
| 876 |
+
{
|
| 877 |
+
"type": "text",
|
| 878 |
+
"text": "Compared Methods. In this work, we implement BinLLM with Matrix Factorization (Koren et al., 2009) as the collaborative model in its text-encoding module. To assess the effectiveness of BinLLM, we compare it with four categories of",
|
| 879 |
+
"bbox": [
|
| 880 |
+
112,
|
| 881 |
+
802,
|
| 882 |
+
489,
|
| 883 |
+
883
|
| 884 |
+
],
|
| 885 |
+
"page_idx": 4
|
| 886 |
+
},
|
| 887 |
+
{
|
| 888 |
+
"type": "text",
|
| 889 |
+
"text": "methods: conventional collaborative filtering methods (MF, LightGCN, SASRec, DIN), LLMRec methods without integrating collaborative information (ICL, Prompt4NR, TALLRec), LLMRec methods with integrated collaborative information (PersonPrompt, CoLLM), and methods combining language models and collaborative models (CTRL).",
|
| 890 |
+
"bbox": [
|
| 891 |
+
507,
|
| 892 |
+
84,
|
| 893 |
+
885,
|
| 894 |
+
197
|
| 895 |
+
],
|
| 896 |
+
"page_idx": 4
|
| 897 |
+
},
|
| 898 |
+
{
|
| 899 |
+
"type": "list",
|
| 900 |
+
"sub_type": "text",
|
| 901 |
+
"list_items": [
|
| 902 |
+
"- MF (Koren et al., 2009): This refers to a classic latent factor-based collaborative filtering method — Matrix Factorization.",
|
| 903 |
+
"- LightGCN (He et al., 2020): This is one representative graph-based collaborative filtering method, utilizing graph neural networks to enhance collaborative information modeling.",
|
| 904 |
+
"- SASRec (Kang and McAuley, 2018): This is a representative sequential-based collaborative filtering method that utilizes self-attention for modeling user preferences.",
|
| 905 |
+
"- DIN (Zhou et al., 2019): This is a representative collaborative Click-Through Rate (CTR) model, which employs target-aware attention to activate the most relevant user behaviors, thereby enhancing user interest modeling.",
|
| 906 |
+
"- CTRL (DIN) (Li et al., 2023b): This is a state-of-the-art (SOTA) method for combining language and collaborative models through knowledge distillation. We implement its collaborative model as DIN.",
|
| 907 |
+
"- ICL (Dai et al., 2023a): This is an In-Context Learning-based LLMRec method, which directly asks the original LLM for recommendations.",
|
| 908 |
+
"- Prompt4NR (Zhang and Wang, 2023): This is a state-of-the-art (SOTA) soft prompt tuning-based LLMRec method. Initially designed to leverage the language model (LM), we extend it to utilize LLMs, taking the implementation in CoLLM (Zhang et al., 2023b).",
|
| 909 |
+
"- TALLRec (Bao et al., 2023b): This is a state-of-the-art LLMRec method that aligns LLMs with recommendations through instruction tuning.",
|
| 910 |
+
"- **PersonPrompt** (Li et al., 2023a): This is a LLM-Rec method, which integrates collaborative information by adding new tokens and token embeddings to represent users and items. It could be regarded as a personalized soft-prompt tuning method.",
|
| 911 |
+
"- CoLLM (Zhang et al., 2023b): This is a state-of-the-art LLMRec method that integrates collaborative information by mapping collaborative"
|
| 912 |
+
],
|
| 913 |
+
"bbox": [
|
| 914 |
+
507,
|
| 915 |
+
212,
|
| 916 |
+
882,
|
| 917 |
+
921
|
| 918 |
+
],
|
| 919 |
+
"page_idx": 4
|
| 920 |
+
},
|
| 921 |
+
{
|
| 922 |
+
"type": "page_footnote",
|
| 923 |
+
"text": "<sup>3</sup>https://grouplens.org/datasets/movielens/1m/",
|
| 924 |
+
"bbox": [
|
| 925 |
+
134,
|
| 926 |
+
892,
|
| 927 |
+
478,
|
| 928 |
+
906
|
| 929 |
+
],
|
| 930 |
+
"page_idx": 4
|
| 931 |
+
},
|
| 932 |
+
{
|
| 933 |
+
"type": "page_footnote",
|
| 934 |
+
"text": "<sup>4</sup>https://nijianmo.github.io/amazon/index.html",
|
| 935 |
+
"bbox": [
|
| 936 |
+
136,
|
| 937 |
+
906,
|
| 938 |
+
478,
|
| 939 |
+
919
|
| 940 |
+
],
|
| 941 |
+
"page_idx": 4
|
| 942 |
+
},
|
| 943 |
+
{
|
| 944 |
+
"type": "page_number",
|
| 945 |
+
"text": "9185",
|
| 946 |
+
"bbox": [
|
| 947 |
+
480,
|
| 948 |
+
928,
|
| 949 |
+
519,
|
| 950 |
+
940
|
| 951 |
+
],
|
| 952 |
+
"page_idx": 4
|
| 953 |
+
},
|
| 954 |
+
{
|
| 955 |
+
"type": "table",
|
| 956 |
+
"img_path": "images/c1dac3bd6fdf652d712fdbedc4ce26fca5aff1a968bb498f69a92683ca89eb47.jpg",
|
| 957 |
+
"table_caption": [
|
| 958 |
+
"Table 3: Overall performance comparison on the ML-1M and Amazon-Book datasets. \"Collab.\" denotes collaborative recommendation methods. \"Rel. Imp.\" denotes the relative improvement of BinLLM compared to baselines, averaged over the two metrics."
|
| 959 |
+
],
|
| 960 |
+
"table_footnote": [],
|
| 961 |
+
"table_body": "<table><tr><td colspan=\"2\">Dataset</td><td colspan=\"3\">ML-1M</td><td colspan=\"3\">Amazon-Book</td></tr><tr><td colspan=\"2\">Methods</td><td>AUC</td><td>UAUC</td><td>Rel. Imp.</td><td>AUC</td><td>UAUC</td><td>Rel. Imp.</td></tr><tr><td rowspan=\"4\">Collab.</td><td>MF</td><td>0.6482</td><td>0.6361</td><td>12.9%</td><td>0.7134</td><td>0.5565</td><td>14.7%</td></tr><tr><td>LightGCN</td><td>0.5959</td><td>0.6499</td><td>15.8%</td><td>0.7103</td><td>0.5639</td><td>14.2%</td></tr><tr><td>SASRec</td><td>0.7078</td><td>0.6884</td><td>3.0%</td><td>0.6887</td><td>0.5714</td><td>15.3%</td></tr><tr><td>DIN</td><td>0.7166</td><td>0.6459</td><td>5.6%</td><td>0.8163</td><td>0.6145</td><td>2.0%</td></tr><tr><td>LM+Collab.</td><td>CTRL (DIN)</td><td>0.7159</td><td>0.6492</td><td>5.4%</td><td>0.8202</td><td>0.5996</td><td>3.0%</td></tr><tr><td rowspan=\"3\">LLMRec</td><td>ICL</td><td>0.5320</td><td>0.5268</td><td>35.8%</td><td>0.4820</td><td>0.4856</td><td>50.7%</td></tr><tr><td>Prompt4NR</td><td>0.7071</td><td>0.6739</td><td>4.1%</td><td>0.7224</td><td>0.5881</td><td>10.9%</td></tr><tr><td>TALLRec</td><td>0.7097</td><td>0.6818</td><td>3.3%</td><td>0.7375</td><td>0.5983</td><td>8.2%</td></tr><tr><td rowspan=\"3\">LLMRec+Collab.</td><td>PersonPrompt</td><td>0.7214</td><td>0.6563</td><td>4.5%</td><td>0.7273</td><td>0.5956</td><td>9.9%</td></tr><tr><td>CoLLM-MF</td><td>0.7295</td><td>0.6875</td><td>1.5%</td><td>0.8109</td><td>0.6225</td><td>1.7%</td></tr><tr><td>CoLLM-DIN</td><td>0.7243</td><td>0.6897</td><td>1.7%</td><td>0.8245</td><td>0.6474</td><td>-1.0%</td></tr><tr><td>Ours</td><td>BinLLM</td><td>0.7425</td><td>0.6956</td><td>-</td><td>0.8264</td><td>0.6319</td><td>-</td></tr></table>",
|
| 962 |
+
"bbox": [
|
| 963 |
+
124,
|
| 964 |
+
135,
|
| 965 |
+
870,
|
| 966 |
+
368
|
| 967 |
+
],
|
| 968 |
+
"page_idx": 5
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "text",
|
| 972 |
+
"text": "embeddings into the latent space of the LLM. We consider two implementations: CoLLM-MF, which utilizes MF to extract collaborative embeddings, and CoLLM-DIN, which uses the DIN to extract collaborative embeddings.",
|
| 973 |
+
"bbox": [
|
| 974 |
+
126,
|
| 975 |
+
391,
|
| 976 |
+
489,
|
| 977 |
+
470
|
| 978 |
+
],
|
| 979 |
+
"page_idx": 5
|
| 980 |
+
},
|
| 981 |
+
{
|
| 982 |
+
"type": "text",
|
| 983 |
+
"text": "Hyper-parameters and Evaluation Metrics. For all methods, we strictly adhere to the hyperparameter settings outlined in the CoLLM paper (Zhang et al., 2023b), with Vicuna-7B used as the employed LLM. It's worth noting that for our method, we set the dimension of the collaborative embeddings (i.e., the length of the binary representations in Equation (2)) to 32 by default. Considering the length is not very large, we choose not to perform compression in our text-like encoding module by default. We tune the hyper-parameters based on the AUC metric on the validation dataset.",
|
| 984 |
+
"bbox": [
|
| 985 |
+
112,
|
| 986 |
+
478,
|
| 987 |
+
489,
|
| 988 |
+
669
|
| 989 |
+
],
|
| 990 |
+
"page_idx": 5
|
| 991 |
+
},
|
| 992 |
+
{
|
| 993 |
+
"type": "text",
|
| 994 |
+
"text": "Regarding evaluation metrics, we employ two widely used metrics for click/rating prediction: AUC (Area under the ROC Curve), which measures the overall prediction accuracy, and UAUC (AUC averaged over users), which provides insights into the ranking quality for users.",
|
| 995 |
+
"bbox": [
|
| 996 |
+
112,
|
| 997 |
+
671,
|
| 998 |
+
489,
|
| 999 |
+
766
|
| 1000 |
+
],
|
| 1001 |
+
"page_idx": 5
|
| 1002 |
+
},
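A small sketch of how the two metrics can be computed, assuming scikit-learn is available; skipping users whose labels are all one class is a common convention and an assumption here, not a detail stated in the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_and_uauc(user_ids, labels, scores):
    """AUC over all predictions, and UAUC = AUC averaged over users
    (users with only one label class are skipped)."""
    user_ids, labels, scores = map(np.asarray, (user_ids, labels, scores))
    auc = roc_auc_score(labels, scores)
    per_user = []
    for u in np.unique(user_ids):
        m = user_ids == u
        if labels[m].min() != labels[m].max():    # need both classes
            per_user.append(roc_auc_score(labels[m], scores[m]))
    return auc, float(np.mean(per_user))

# Tiny illustrative example with two users.
print(auc_and_uauc([1, 1, 1, 2, 2, 2],
                   [1, 0, 1, 0, 1, 0],
                   [0.9, 0.2, 0.7, 0.4, 0.8, 0.3]))
```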
|
| 1003 |
+
{
|
| 1004 |
+
"type": "text",
|
| 1005 |
+
"text": "3.2 Performance Comparison",
|
| 1006 |
+
"text_level": 1,
|
| 1007 |
+
"bbox": [
|
| 1008 |
+
112,
|
| 1009 |
+
777,
|
| 1010 |
+
364,
|
| 1011 |
+
793
|
| 1012 |
+
],
|
| 1013 |
+
"page_idx": 5
|
| 1014 |
+
},
|
| 1015 |
+
{
|
| 1016 |
+
"type": "text",
|
| 1017 |
+
"text": "In this subsection, we initially examine the overall performance of the compared methods and subsequently analyze their performance in warm-start and cold-start scenarios, respectively.",
|
| 1018 |
+
"bbox": [
|
| 1019 |
+
112,
|
| 1020 |
+
797,
|
| 1021 |
+
487,
|
| 1022 |
+
862
|
| 1023 |
+
],
|
| 1024 |
+
"page_idx": 5
|
| 1025 |
+
},
|
| 1026 |
+
{
|
| 1027 |
+
"type": "text",
|
| 1028 |
+
"text": "3.2.1 Overall Performance (RQ1)",
|
| 1029 |
+
"text_level": 1,
|
| 1030 |
+
"bbox": [
|
| 1031 |
+
112,
|
| 1032 |
+
870,
|
| 1033 |
+
393,
|
| 1034 |
+
885
|
| 1035 |
+
],
|
| 1036 |
+
"page_idx": 5
|
| 1037 |
+
},
|
| 1038 |
+
{
|
| 1039 |
+
"type": "text",
|
| 1040 |
+
"text": "We summarize the overall performance of the compared methods in Table 3. From the table, we draw",
|
| 1041 |
+
"bbox": [
|
| 1042 |
+
112,
|
| 1043 |
+
889,
|
| 1044 |
+
489,
|
| 1045 |
+
920
|
| 1046 |
+
],
|
| 1047 |
+
"page_idx": 5
|
| 1048 |
+
},
|
| 1049 |
+
{
|
| 1050 |
+
"type": "image",
|
| 1051 |
+
"img_path": "images/59636b974074ce98dbf73cc1258ba31631506ab3f074957e0d558bdb7ce55a96.jpg",
|
| 1052 |
+
"image_caption": [
|
| 1053 |
+
"(a) ML-1M Warm"
|
| 1054 |
+
],
|
| 1055 |
+
"image_footnote": [],
|
| 1056 |
+
"bbox": [
|
| 1057 |
+
512,
|
| 1058 |
+
397,
|
| 1059 |
+
690,
|
| 1060 |
+
494
|
| 1061 |
+
],
|
| 1062 |
+
"page_idx": 5
|
| 1063 |
+
},
|
| 1064 |
+
{
|
| 1065 |
+
"type": "image",
|
| 1066 |
+
"img_path": "images/47838a3b2ad0d3c7c32f770597f7fde9de804475c7de7c70bcfce060c86f5554.jpg",
|
| 1067 |
+
"image_caption": [
|
| 1068 |
+
"(b) Amazon-book Warm"
|
| 1069 |
+
],
|
| 1070 |
+
"image_footnote": [],
|
| 1071 |
+
"bbox": [
|
| 1072 |
+
699,
|
| 1073 |
+
395,
|
| 1074 |
+
878,
|
| 1075 |
+
493
|
| 1076 |
+
],
|
| 1077 |
+
"page_idx": 5
|
| 1078 |
+
},
|
| 1079 |
+
{
|
| 1080 |
+
"type": "image",
|
| 1081 |
+
"img_path": "images/2062161657eb5541f01ea6dd91bdec4eac63582f816ab03458775b1ecf2fd0d1.jpg",
|
| 1082 |
+
"image_caption": [
|
| 1083 |
+
"(c) ML-1M Cold"
|
| 1084 |
+
],
|
| 1085 |
+
"image_footnote": [],
|
| 1086 |
+
"bbox": [
|
| 1087 |
+
515,
|
| 1088 |
+
525,
|
| 1089 |
+
694,
|
| 1090 |
+
623
|
| 1091 |
+
],
|
| 1092 |
+
"page_idx": 5
|
| 1093 |
+
},
|
| 1094 |
+
{
|
| 1095 |
+
"type": "image",
|
| 1096 |
+
"img_path": "images/11405e45808c159238f14c11aaed41253519435751f404b6afc1f1165c9acf08.jpg",
|
| 1097 |
+
"image_caption": [
|
| 1098 |
+
"(d) Amazon-book Cold",
|
| 1099 |
+
"Figure 2: Performance comparison in warm and cold scenarios on ML-1M and Amazon-Book. The left y-axis represents AUC, while the right one represents UAUC."
|
| 1100 |
+
],
|
| 1101 |
+
"image_footnote": [],
|
| 1102 |
+
"bbox": [
|
| 1103 |
+
702,
|
| 1104 |
+
527,
|
| 1105 |
+
878,
|
| 1106 |
+
621
|
| 1107 |
+
],
|
| 1108 |
+
"page_idx": 5
|
| 1109 |
+
},
|
| 1110 |
+
{
|
| 1111 |
+
"type": "text",
|
| 1112 |
+
"text": "the following observations:",
|
| 1113 |
+
"bbox": [
|
| 1114 |
+
507,
|
| 1115 |
+
728,
|
| 1116 |
+
714,
|
| 1117 |
+
743
|
| 1118 |
+
],
|
| 1119 |
+
"page_idx": 5
|
| 1120 |
+
},
|
| 1121 |
+
{
|
| 1122 |
+
"type": "list",
|
| 1123 |
+
"sub_type": "text",
|
| 1124 |
+
"list_items": [
|
| 1125 |
+
"- When compared to baselines, our BinLLM achieves the best performance overall, except when compared to CoLLM-DIN on the UAUC metric. These results confirm the superiority of BinLLM in leveraging both collaborative information and the power of LLMs to achieve better recommendation performance.",
|
| 1126 |
+
"- Comparing LLMRec methods that integrate collaborative information with LLMRec methods that do not consider collaborative information,"
|
| 1127 |
+
],
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
507,
|
| 1130 |
+
756,
|
| 1131 |
+
884,
|
| 1132 |
+
919
|
| 1133 |
+
],
|
| 1134 |
+
"page_idx": 5
|
| 1135 |
+
},
|
| 1136 |
+
{
|
| 1137 |
+
"type": "page_number",
|
| 1138 |
+
"text": "9186",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
478,
|
| 1141 |
+
927,
|
| 1142 |
+
519,
|
| 1143 |
+
940
|
| 1144 |
+
],
|
| 1145 |
+
"page_idx": 5
|
| 1146 |
+
},
|
| 1147 |
+
{
|
| 1148 |
+
"type": "text",
|
| 1149 |
+
"text": "we observe that incorporating collaborative information generally improves performance and enables LLMRec to surpass traditional collaborative and LM-based methods. These results underscore the importance of integrating collaborative information into LLMs for recommendation.",
|
| 1150 |
+
"bbox": [
|
| 1151 |
+
127,
|
| 1152 |
+
84,
|
| 1153 |
+
487,
|
| 1154 |
+
179
|
| 1155 |
+
],
|
| 1156 |
+
"page_idx": 6
|
| 1157 |
+
},
|
| 1158 |
+
{
|
| 1159 |
+
"type": "list",
|
| 1160 |
+
"sub_type": "text",
|
| 1161 |
+
"list_items": [
|
| 1162 |
+
"- Comparing BinLLM with existing LLMRec methods that also consider collaborative information, our BinLLM consistently outperforms CoLLM-MF and PersonPrompt. Compared with CoLLM-DIN, BinLLM still achieves better results except for the UAUC metric on Amazon-book. Considering that CoLLM-DIN employs a more advanced collaborative model while BinLLM relies solely on MF, these results confirm that encoding collaborative information in a text-like manner better aligns with LLMs, allowing us to leverage their power for recommendation more effectively.",
|
| 1163 |
+
"- Among LLMRec methods that consider collaborative information, PersonPrompt, which learns token embeddings for users and items from scratch, performs the worst, significantly lagging behind others. This can be attributed to the low learning efficacy resulting from the introduction of additional tokens and token embeddings."
|
| 1164 |
+
],
|
| 1165 |
+
"bbox": [
|
| 1166 |
+
114,
|
| 1167 |
+
184,
|
| 1168 |
+
489,
|
| 1169 |
+
508
|
| 1170 |
+
],
|
| 1171 |
+
"page_idx": 6
|
| 1172 |
+
},
|
| 1173 |
+
{
|
| 1174 |
+
"type": "text",
|
| 1175 |
+
"text": "3.2.2 Warm and Cold Performance",
|
| 1176 |
+
"text_level": 1,
|
| 1177 |
+
"bbox": [
|
| 1178 |
+
112,
|
| 1179 |
+
516,
|
| 1180 |
+
406,
|
| 1181 |
+
530
|
| 1182 |
+
],
|
| 1183 |
+
"page_idx": 6
|
| 1184 |
+
},
|
| 1185 |
+
{
|
| 1186 |
+
"type": "text",
|
| 1187 |
+
"text": "When integrating collaborative information into LLMRec, one consideration is to enhance their warm-start performance, enabling them to achieve good performance in both warm-start and cold-start scenarios. We now investigate the performance in the two scenarios. Specifically, we adhere to the protocol outlined in the CoLLM paper (Zhang et al., 2023b) to partition the testing data into warm data and cold data based on the interaction count of users and items, and subsequently evaluate the model on them. We summarize the results in Figure 2. Here, we compare four representative methods: MF, TALLRec, CoLLM-MF, and BinLLM.",
|
| 1188 |
+
"bbox": [
|
| 1189 |
+
112,
|
| 1190 |
+
533,
|
| 1191 |
+
487,
|
| 1192 |
+
743
|
| 1193 |
+
],
|
| 1194 |
+
"page_idx": 6
|
| 1195 |
+
},
|
| 1196 |
+
{
|
| 1197 |
+
"type": "text",
|
| 1198 |
+
"text": "According to the figure, in the warm scenarios, TALLRec, an LLMRec method without considering collaborative information, performs worse than MF, while both CoLLM and BinLLM outperform MF, with BinLLM being the best. These results indicate that collaborative information is important for warm-start performance, and our text-like encoding has superiority in combining the information with LLMs. In the cold-start scenarios, all LLMRec methods outperform MF, confirming the superiority of LLMRec in cold-start scenar",
|
| 1199 |
+
"bbox": [
|
| 1200 |
+
112,
|
| 1201 |
+
745,
|
| 1202 |
+
489,
|
| 1203 |
+
921
|
| 1204 |
+
],
|
| 1205 |
+
"page_idx": 6
|
| 1206 |
+
},
|
| 1207 |
+
{
|
| 1208 |
+
"type": "table",
|
| 1209 |
+
"img_path": "images/dddaca3eccccc2b32fb9dce8c08973ee283a5842c7d942173742cd4d3900cb27.jpg",
|
| 1210 |
+
"table_caption": [
|
| 1211 |
+
"Table 4: Results of the ablation studies on ML-1M and Amazon-Book, where \"TO\", \"IO\", \"IT\" denote \"Text-Only\", \"ID-Only\", \"Intuitive-Tuning\", respectively."
|
| 1212 |
+
],
|
| 1213 |
+
"table_footnote": [],
|
| 1214 |
+
"table_body": "<table><tr><td>Datasets</td><td colspan=\"2\">ML-1M</td><td colspan=\"2\">Amazon-book</td></tr><tr><td>Methods</td><td>AUC</td><td>UAUC</td><td>AUC</td><td>UAUC</td></tr><tr><td>BinMF</td><td>0.7189</td><td>0.6654</td><td>0.8087</td><td>0.5895</td></tr><tr><td>BinLLM-TO</td><td>0.7097</td><td>0.6818</td><td>0.7375</td><td>0.5983</td></tr><tr><td>BinLLM-IO</td><td>0.7307</td><td>0.6797</td><td>0.8173</td><td>0.5919</td></tr><tr><td>BinLLM-IT</td><td>0.7286</td><td>0.6842</td><td>0.8246</td><td>0.6165</td></tr><tr><td>BinLLM</td><td>0.7425</td><td>0.6956</td><td>0.8264</td><td>0.6319</td></tr></table>",
|
| 1215 |
+
"bbox": [
|
| 1216 |
+
514,
|
| 1217 |
+
135,
|
| 1218 |
+
885,
|
| 1219 |
+
244
|
| 1220 |
+
],
|
| 1221 |
+
"page_idx": 6
|
| 1222 |
+
},
|
| 1223 |
+
{
|
| 1224 |
+
"type": "text",
|
| 1225 |
+
"text": "ios. Moreover, BinLLM enhances the cold-start performance compared to CoLLM in most cases, possibly due to the binarized embeddings having better generalization.",
|
| 1226 |
+
"bbox": [
|
| 1227 |
+
507,
|
| 1228 |
+
267,
|
| 1229 |
+
882,
|
| 1230 |
+
332
|
| 1231 |
+
],
|
| 1232 |
+
"page_idx": 6
|
| 1233 |
+
},
|
| 1234 |
+
{
|
| 1235 |
+
"type": "text",
|
| 1236 |
+
"text": "3.3 In-depth Analyses (RQ2)",
|
| 1237 |
+
"text_level": 1,
|
| 1238 |
+
"bbox": [
|
| 1239 |
+
507,
|
| 1240 |
+
342,
|
| 1241 |
+
752,
|
| 1242 |
+
357
|
| 1243 |
+
],
|
| 1244 |
+
"page_idx": 6
|
| 1245 |
+
},
|
| 1246 |
+
{
|
| 1247 |
+
"type": "text",
|
| 1248 |
+
"text": "In this subsection, we conduct experiments to analyze the influence of BinLLM's different components on its effectiveness.",
|
| 1249 |
+
"bbox": [
|
| 1250 |
+
507,
|
| 1251 |
+
363,
|
| 1252 |
+
882,
|
| 1253 |
+
409
|
| 1254 |
+
],
|
| 1255 |
+
"page_idx": 6
|
| 1256 |
+
},
|
| 1257 |
+
{
|
| 1258 |
+
"type": "text",
|
| 1259 |
+
"text": "3.3.1 Ablation Study",
|
| 1260 |
+
"text_level": 1,
|
| 1261 |
+
"bbox": [
|
| 1262 |
+
507,
|
| 1263 |
+
419,
|
| 1264 |
+
690,
|
| 1265 |
+
434
|
| 1266 |
+
],
|
| 1267 |
+
"page_idx": 6
|
| 1268 |
+
},
|
| 1269 |
+
{
|
| 1270 |
+
"type": "text",
|
| 1271 |
+
"text": "We first further verify the benefits of introducing text-like encoding of collaborative information into LLMs. Specifically, we compare the default BinLLM with the following variants: 1) BinMF, which avoids using the LLM but directly utilizes the binary representations for recommendations like MF, 2) BinLLM-TO, which removes the ID field from BinLLM's prompt template, i.e., only using the text information, 3) BinLLM-IO, which removes the text field from BinLLM's prompt, i.e., only using the collaborative information. Additionally, we also study the influence of the two-step tuning by comparing a variant that employs intuitive tuning, denoted by BinLLM-IT. The comparison results are summarized in Table 4.",
|
| 1272 |
+
"bbox": [
|
| 1273 |
+
507,
|
| 1274 |
+
438,
|
| 1275 |
+
882,
|
| 1276 |
+
678
|
| 1277 |
+
],
|
| 1278 |
+
"page_idx": 6
|
| 1279 |
+
},
|
| 1280 |
+
{
|
| 1281 |
+
"type": "text",
|
| 1282 |
+
"text": "From the table, we make the following observations: 1) BinMF underperforms all BinLLM variants that consider collaborative information, confirming the superiority of leveraging LLMs for recommendation. 2) BinLLM-TO underperforms other BinLLM variants, indicating that introducing collaborative information is crucial for enhancing LLMRec performance. 3) BinLLM-IO generally underperforms BinLLM-IT and the default BinLLM, highlighting the importance of considering both textual and collaborative information. Lastly, comparing BinLLM-IT with the default BinLLM, BinLLM-IT consistently performs worse. This verifies our claims about tuning designs: directly tuning LLMs with prompts containing collaborative",
|
| 1283 |
+
"bbox": [
|
| 1284 |
+
507,
|
| 1285 |
+
680,
|
| 1286 |
+
884,
|
| 1287 |
+
921
|
| 1288 |
+
],
|
| 1289 |
+
"page_idx": 6
|
| 1290 |
+
},
|
| 1291 |
+
{
|
| 1292 |
+
"type": "page_number",
|
| 1293 |
+
"text": "9187",
|
| 1294 |
+
"bbox": [
|
| 1295 |
+
480,
|
| 1296 |
+
927,
|
| 1297 |
+
519,
|
| 1298 |
+
940
|
| 1299 |
+
],
|
| 1300 |
+
"page_idx": 6
|
| 1301 |
+
},
|
| 1302 |
+
{
|
| 1303 |
+
"type": "image",
|
| 1304 |
+
"img_path": "images/6e80bb7612c01593b99f29d418a5191d553cb723a0b722c295137c60ee7a3c1e.jpg",
|
| 1305 |
+
"image_caption": [
|
| 1306 |
+
"(a) ML-1M"
|
| 1307 |
+
],
|
| 1308 |
+
"image_footnote": [],
|
| 1309 |
+
"bbox": [
|
| 1310 |
+
117,
|
| 1311 |
+
91,
|
| 1312 |
+
295,
|
| 1313 |
+
186
|
| 1314 |
+
],
|
| 1315 |
+
"page_idx": 7
|
| 1316 |
+
},
|
| 1317 |
+
{
|
| 1318 |
+
"type": "image",
|
| 1319 |
+
"img_path": "images/7fd2ea50231fba419ec3de1c74ec4083af030f3e082da27a1c9b9bd94f64f36d.jpg",
|
| 1320 |
+
"image_caption": [
|
| 1321 |
+
"(b) Amazon-book",
|
| 1322 |
+
"Figure 3: Performance of BinLLM with (w comp.) and without compression (w/o comp.). The left y-axis represents AUC, while the right one represents UAUC."
|
| 1323 |
+
],
|
| 1324 |
+
"image_footnote": [],
|
| 1325 |
+
"bbox": [
|
| 1326 |
+
305,
|
| 1327 |
+
89,
|
| 1328 |
+
482,
|
| 1329 |
+
186
|
| 1330 |
+
],
|
| 1331 |
+
"page_idx": 7
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "text",
|
| 1335 |
+
"text": "information from scratch may lead to underutilization of both textual and collaborative information.",
|
| 1336 |
+
"bbox": [
|
| 1337 |
+
112,
|
| 1338 |
+
291,
|
| 1339 |
+
489,
|
| 1340 |
+
322
|
| 1341 |
+
],
|
| 1342 |
+
"page_idx": 7
|
| 1343 |
+
},
|
| 1344 |
+
{
|
| 1345 |
+
"type": "text",
|
| 1346 |
+
"text": "The influence of text-like encoding method. Taking a further step, we explore how the performance changes when the collaborative information is encoded in alternative textual formats instead of binary sequences. Specifically, we examine a variant called BinLLM-emb-str, which employs UMAP (McInnes et al., 2018) to reduce the dimensionality of the embeddings and converts the results into strings for integration into our prompt. We compare this variant with the original BinLLM and CoLLM on the ML-1M dataset, yielding the following AUC results: 0.7343 for BinLLM-emb-str, 0.7425 for BinLLM, and 0.7243 for CoLLM. As the results indicate, directly converting the original embeddings into strings leads to inferior performance compared to our proposed method. However, it is noteworthy that BinLLM-emb-str still outperforms CoLLM. This finding suggests that encoding collaborative information in text formats is usually advantageous, compared to the method (CoLLM) performed in latent space.",
|
| 1347 |
+
"bbox": [
|
| 1348 |
+
115,
|
| 1349 |
+
329,
|
| 1350 |
+
489,
|
| 1351 |
+
667
|
| 1352 |
+
],
|
| 1353 |
+
"page_idx": 7
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "text",
|
| 1357 |
+
"text": "3.3.2 The Influence of Compression",
|
| 1358 |
+
"text_level": 1,
|
| 1359 |
+
"bbox": [
|
| 1360 |
+
112,
|
| 1361 |
+
677,
|
| 1362 |
+
410,
|
| 1363 |
+
692
|
| 1364 |
+
],
|
| 1365 |
+
"page_idx": 7
|
| 1366 |
+
},
|
| 1367 |
+
{
|
| 1368 |
+
"type": "text",
|
| 1369 |
+
"text": "In the preceding experiments, we did not use compression for our text-like encoding of collaborative information by default. Here, we conduct experiments to study its influence by comparing BinLLM with compression (w comp.) and without compression (w/o comp.). The comparison results of recommendation performance are summarized in Figure 3. According to the figure, BinLLM with compression generally shows comparable performance to BinLLM without compression. Moreover, when compared with baselines, the comparison trends are similar to BinLLM without compression (with only some differences observed for the UAUC metric on the ML-1M dataset when com",
|
| 1370 |
+
"bbox": [
|
| 1371 |
+
112,
|
| 1372 |
+
696,
|
| 1373 |
+
490,
|
| 1374 |
+
920
|
| 1375 |
+
],
|
| 1376 |
+
"page_idx": 7
|
| 1377 |
+
},
|
| 1378 |
+
{
|
| 1379 |
+
"type": "text",
|
| 1380 |
+
"text": "pared with CoLLM). These results indicate that compression can reduce the representation length while maintaining performance to a large extent.",
|
| 1381 |
+
"bbox": [
|
| 1382 |
+
507,
|
| 1383 |
+
84,
|
| 1384 |
+
880,
|
| 1385 |
+
131
|
| 1386 |
+
],
|
| 1387 |
+
"page_idx": 7
|
| 1388 |
+
},
|
| 1389 |
+
{
|
| 1390 |
+
"type": "text",
|
| 1391 |
+
"text": "As shown in the example of Equation (4), the dot-decimal notation can compress the length of collaborative representation by approximately 2.5 times<sup>5</sup>. However, in our experiments, the inference acceleration did not reach this level. This is because we only included the collaborative representations for the target user and items, which constitute a smaller part of the total prompt. Specifically, the inference time for BinLLM without compression and with compression was 106s and 93s on ML-1M, and 483s and 435s on Amazon, respectively. If considering collaborative information for all historically interacted items, as done by Liao et al. (2024), the expected inference acceleration would be more significant.",
|
| 1392 |
+
"bbox": [
|
| 1393 |
+
507,
|
| 1394 |
+
133,
|
| 1395 |
+
884,
|
| 1396 |
+
374
|
| 1397 |
+
],
|
| 1398 |
+
"page_idx": 7
|
| 1399 |
+
},
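A quick numerical check of the reported ~2.5x length reduction, using plain Python over random 32-bit sequences; the exact ratio depends on whether the separating dots are counted, so both variants are printed.

```python
# Average length of dot-decimal codes for random 32-bit sequences.
import random

def dot_decimal_digits(bits: str) -> int:
    parts = [str(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
    return sum(len(p) for p in parts)             # digits only, no dots

random.seed(0)
samples = ["".join(random.choice("01") for _ in range(32)) for _ in range(10000)]
avg_digits = sum(dot_decimal_digits(b) for b in samples) / len(samples)

# With the three dots included vs. ignored; the reported ~2.5x sits in this range.
print(32 / (avg_digits + 3), 32 / avg_digits)
```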
|
| 1400 |
+
{
|
| 1401 |
+
"type": "text",
|
| 1402 |
+
"text": "4 Related Work",
|
| 1403 |
+
"text_level": 1,
|
| 1404 |
+
"bbox": [
|
| 1405 |
+
507,
|
| 1406 |
+
387,
|
| 1407 |
+
665,
|
| 1408 |
+
401
|
| 1409 |
+
],
|
| 1410 |
+
"page_idx": 7
|
| 1411 |
+
},
|
| 1412 |
+
{
|
| 1413 |
+
"type": "text",
|
| 1414 |
+
"text": "- Collaborative Information Modeling. Collaborative information modeling is pivotal for personalized recommendations, and significant efforts have been dedicated to this area in traditional research. Initially, the information modeling relied on statistical methods (Sarwar et al., 2001). Subsequently, latent factor models became prevalent, leading to the development of prominent models such as MF (Koren et al., 2009) and FISM (Kabbur et al., 2013). Later, neural network-enhanced latent factor models made substantial advancements (He et al., 2017; Tang and Wang, 2018; Hidasi et al., 2016; Chen et al., 2023). These studies achieved remarkable success in both academia and industry, inspiring exploration into collaborative information modeling for LLMRec. In this study, we propose a method to encode collaborative information in a text-like format, making it suitable for LLM usage.",
|
| 1415 |
+
"bbox": [
|
| 1416 |
+
507,
|
| 1417 |
+
414,
|
| 1418 |
+
884,
|
| 1419 |
+
702
|
| 1420 |
+
],
|
| 1421 |
+
"page_idx": 7
|
| 1422 |
+
},
|
| 1423 |
+
{
|
| 1424 |
+
"type": "text",
|
| 1425 |
+
"text": "- LLMRec. As the impressive capabilities exhibited by LLMs, an increasing number of researchers in the recommendation community are now exploring the potential of applying LLMs to recommendation systems (Wu et al., 2023; Lin et al., 2023a; Li et al., 2024). This exploration can be categorized into two groups. The first group focuses on directly harnessing the abilities of LLMs by employing suitable prompts to stimulate their performance in recommendation scenarios (Dai et al., 2023b; Hou et al., 2024; Shi et al., 2024). On the other hand, another group of researchers argues that LLMs have",
|
| 1426 |
+
"bbox": [
|
| 1427 |
+
507,
|
| 1428 |
+
703,
|
| 1429 |
+
882,
|
| 1430 |
+
896
|
| 1431 |
+
],
|
| 1432 |
+
"page_idx": 7
|
| 1433 |
+
},
|
| 1434 |
+
{
|
| 1435 |
+
"type": "page_footnote",
|
| 1436 |
+
"text": "Ignoring the dot \".in the sequence.",
|
| 1437 |
+
"bbox": [
|
| 1438 |
+
529,
|
| 1439 |
+
906,
|
| 1440 |
+
757,
|
| 1441 |
+
920
|
| 1442 |
+
],
|
| 1443 |
+
"page_idx": 7
|
| 1444 |
+
},
|
| 1445 |
+
{
|
| 1446 |
+
"type": "page_number",
|
| 1447 |
+
"text": "9188",
|
| 1448 |
+
"bbox": [
|
| 1449 |
+
480,
|
| 1450 |
+
927,
|
| 1451 |
+
519,
|
| 1452 |
+
940
|
| 1453 |
+
],
|
| 1454 |
+
"page_idx": 7
|
| 1455 |
+
},
|
| 1456 |
+
{
|
| 1457 |
+
"type": "text",
|
| 1458 |
+
"text": "limited exposure to recommendation tasks during pre-training, and recommendation data often possess personalized characteristics (Bao et al., 2023b; Zhang et al., 2023a). Consequently, it becomes crucial to explore tuning methods that can enhance the recommendation performance of LLMs (Lin et al., 2024a; Zheng et al., 2024b; Lin et al., 2024b, 2023b). As researchers delve deeper into their studies, it has been discovered that LLMs often exhibit an excessive reliance on semantic knowledge for learning, while paying insufficient attention to the acquisition of collaborative information between entities (Bao et al., 2023a).",
|
| 1459 |
+
"bbox": [
|
| 1460 |
+
112,
|
| 1461 |
+
84,
|
| 1462 |
+
492,
|
| 1463 |
+
294
|
| 1464 |
+
],
|
| 1465 |
+
"page_idx": 8
|
| 1466 |
+
},
|
| 1467 |
+
{
"type": "text",
"text": "Researchers have initiated endeavors to incorporate collaborative information into LLMs. Some researchers look for ID encoding methods that introduce new tokens through vocabulary expansion and train these tokens from scratch (Zheng et al., 2024a; Hua et al., 2023; Rajput et al., 2023; Zhu et al., 2024). Among them, Hua et al. utilize statistical information, while Zheng et al. and Rajput et al. employ vector quantization techniques. However, this approach often suffers from low learning efficacy. Another group of researchers explores using a latent factor model to capture collaborative information (Zhang et al., 2023b; Li et al., 2023c; Liao et al., 2024), which is then mapped onto the semantic space of LLMs through a mapping layer. This method exhibits better learning efficacy but requires additional training of the mapping layer. Moreover, due to the non-text-like format of collaborative information, both sets of methods face challenges in aligning with the information processing mechanism in LLMs, limiting their performance.",
"bbox": [115, 294, 490, 632],
"page_idx": 8
},
{
"type": "text",
"text": "5 Conclusion",
"text_level": 1,
"bbox": [112, 645, 247, 659],
"page_idx": 8
},
{
"type": "text",
"text": "In this study, we emphasize the importance of text-like encoding of collaborative information to enhance recommendation performance for LLMRec. We introduce BinLLM, a novel approach designed to incorporate collaborative information in a text-like format by binarizing collaborative embeddings for LLMRec. This encoding allows the collaborative information to be utilized in a manner better aligned with how information is processed in LLMs. Extensive results demonstrate the superiority of BinLLM.",
"bbox": [112, 671, 489, 850],
"page_idx": 8
},
{
"type": "text",
"text": "Limitations",
"text_level": 1,
"bbox": [112, 862, 220, 878],
"page_idx": 8
},
{
"type": "text",
"text": "Currently, this paper has certain limitations in experimental validation: 1) It relies solely on Vicuna-",
"bbox": [112, 889, 489, 921],
"page_idx": 8
},
{
"type": "text",
"text": "7B for experiments; 2) The current experiments focus solely on rating/click prediction tasks, neglecting other recommendation tasks like next-item prediction. In the future, we aim to expand experiments accordingly. Additionally, at the methodological level, similar to existing LLMRec methods, this paper faces challenges with low inference efficiency for real-world recommendation scenarios, particularly in the all-ranking setting. In the future, we could explore applying existing acceleration methods like pruning to improve speed. Moreover, exploring recommendation generation methods that avoid multiple inferences for individual users is another avenue worth pursuing.",
"bbox": [507, 84, 885, 311],
"page_idx": 8
},
{
"type": "text",
"text": "Ethical Considerations",
"text_level": 1,
"bbox": [509, 323, 710, 338],
"page_idx": 8
},
{
"type": "text",
"text": "In this paper, we present BinLLM, designed to encode collaborative information in a text-like format for LLMRec. Our method binarizes numerical embeddings and thus doesn't raise ethical concerns. Moreover, the data we use are publicly available and don't include sensitive details like gender. However, recommendations involve user behavioral data, which might raise privacy concerns; these are addressable by introducing a user-consent mechanism. Additionally, using LLMs may carry hidden negative societal biases. We advocate for conducting thorough risk assessments and advise users to be wary of potential risks linked with model usage.",
"bbox": [507, 351, 885, 576],
"page_idx": 8
},
{
"type": "text",
"text": "Acknowledgments",
"text_level": 1,
"bbox": [509, 590, 672, 607],
"page_idx": 8
},
{
"type": "text",
"text": "This work is supported by the National Key Research and Development Program of China (2022YFB3104701) and the National Natural Science Foundation of China (62272437).",
"bbox": [507, 618, 884, 682],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [510, 711, 608, 726],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Fadi Abusafat, Tiago Pereira, and Henrique Santos. 2021. Roadmap of security threats between ipv4/ipv6. In 2021 IEEE International IOT, Electronics and Mechatronics Conference, pages 1-6. IEEE.",
"Keqin Bao, Jizhi Zhang, Wenjie Wang, Yang Zhang, Zhengyi Yang, Yancheng Luo, Fuli Feng, Xiangnan He, and Qi Tian. 2023a. A bi-step grounding paradigm for large language models in recommendation systems. arXiv preprint arXiv:2308.08434.",
"Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023b. Tallrec: An effective and efficient tuning framework to align large"
],
"bbox": [509, 734, 885, 921],
"page_idx": 8
},
{
"type": "page_number",
"text": "9189",
"bbox": [480, 927, 519, 940],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"language model with recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1007-1014.",
"Keqin Bao, Jizhi Zhang, Yang Zhang, Wang Wenjie, Fuli Feng, and Xiangnan He. 2023c. Large language models for recommendation: Progresses and future directions. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, pages 306-309.",
"Lei Chen, Le Wu, Kun Zhang, Richang Hong, Defu Lian, Zhiqiang Zhang, Jun Zhou, and Meng Wang. 2023. Improving recommendation fairness via data augmentation. In Proceedings of the ACM Web Conference 2023, pages 1012-1020.",
"Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu. 2023a. Uncovering chatgpt's capabilities in recommender systems. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1126-1132.",
"Sunhao Dai et al. 2023b. Uncovering chatgpt's capabilities in recommender systems. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1126-1132. ACM.",
"Gregoire Deletang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, and Joel Veness. 2024. Language modeling is compression. In The Twelfth International Conference on Learning Representations.",
"F. Maxwell Harper and Joseph A. Konstan. 2016. The movielens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4):19:1-19:19.",
"Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, and Marios Fragkoulis. 2023. Leveraging large language models for sequential recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1096-1102.",
"Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yong-Dong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 639-648.",
"Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web, pages 173-182.",
"Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-based recommendations with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016."
],
"bbox": [115, 85, 487, 919],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. 2024. Large language models are zero-shot rankers for recommender systems. In European Conference on Information Retrieval, pages 364-381. Springer.",
"Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations.",
"Wenyue Hua, Shuyuan Xu, Yingqiang Ge, and Yongfeng Zhang. 2023. How to index item ids for recommendation foundation models. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, pages 195-204.",
"Santosh Kabbur, Xia Ning, and George Karypis. 2013. Fism: factored item similarity models for top-n recommender systems. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 659-667. Association for Computing Machinery.",
"Wang-Cheng Kang and Julian J. McAuley. 2018. Self-attentive sequential recommendation. In IEEE International Conference on Data Mining, ICDM 2018, Singapore, November 17-20, 2018, pages 197-206. IEEE Computer Society.",
"Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37.",
"Lei Li, Yongfeng Zhang, and Li Chen. 2023a. Personalized prompt learning for explainable recommendation. ACM Trans. Inf. Syst., 41(4).",
"Xiangyang Li, Bo Chen, Lu Hou, and Ruiming Tang. 2023b. Ctrl: Connect tabular and language model for ctr prediction. arXiv preprint arXiv:2306.02841.",
"Xinhang Li, Chong Chen, Xiangyu Zhao, Yong Zhang, and Chunxiao Xing. 2023c. E4srec: An elegant effective efficient extensible solution of large language models for sequential recommendation. arXiv preprint arXiv:2312.02443.",
"Yongqi Li, Xinyu Lin, Wenjie Wang, Fuli Feng, Liang Pang, Wenjie Li, Liqiang Nie, Xiangnan He, and Tat-Seng Chua. 2024. A survey of generative search and recommendation in the era of large language models. arXiv preprint arXiv:2404.16924.",
"Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang, and Xiangnan He. 2024. Llara: Large language-recommendation assistant.",
"Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, et al. 2023a. How can recommender systems benefit from large language models: A survey. arXiv preprint arXiv:2306.05817."
],
"bbox": [510, 85, 880, 919],
"page_idx": 9
},
{
"type": "page_number",
"text": "9190",
"bbox": [480, 928, 519, 940],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Jianghao Lin, Rong Shan, Chenxu Zhu, Kounianhua Du, Bo Chen, Shigang Quan, Ruiming Tang, Yong Yu, and Weinan Zhang. 2024a. Rella: Retrieval-enhanced large language models for lifelong sequential behavior comprehension in recommendation. In Proceedings of the ACM on Web Conference 2024, pages 3497-3508.",
"Xinyu Lin, Wenjie Wang, Yongqi Li, Fuli Feng, See-Kiong Ng, and Tat-Seng Chua. 2023b. A multi-facet paradigm to bridge large language model and recommendation. arXiv preprint arXiv:2310.06491.",
"Xinyu Lin, Wenjie Wang, Yongqi Li, Shuo Yang, Fuli Feng, Yinwei Wei, and Tat-Seng Chua. 2024b. Data-efficient fine-tuning for llm-based recommendation. arXiv preprint arXiv:2401.17197.",
"Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. 2018. Umap: Uniform manifold approximation and projection. Journal of Open Source Software, 3(29):861.",
"Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188-197.",
"Larry L Peterson and Bruce S Davie. 2007. Computer networks: a systems approach. Elsevier.",
"Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan Hulikal Keshavan, Trung Vu, Lukasz Heldt, Lichan Hong, Yi Tay, Vinh Q. Tran, Jonah Samost, Maciej Kula, Ed H. Chi, and Maheswaran Sathiamoorthy. 2023. Recommender systems with generative retrieval. In Thirty-seventh Conference on Neural Information Processing Systems.",
"Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2001. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, pages 285-295.",
"Jaromir Savelka, Arav Agarwal, Marshall An, Chris Bogart, and Majd Sakr. 2023. Thrilled by your progress! large language models (gpt-4) no longer struggle to pass assessments in higher education programming courses. In Proceedings of the 2023 ACM Conference on International Computing Education Research, pages 78-92.",
"Wentao Shi, Xiangnan He, Yang Zhang, Chongming Gao, Xinyue Li, Jizhi Zhang, Qifan Wang, and Fuli Feng. 2024. Large language models are learnable planners for long-term recommendation. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval.",
"Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, and Xia Hu. 2020. Learning to hash"
],
"bbox": [115, 85, 489, 920],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"with graph neural networks for recommender systems. In Proceedings of The Web Conference 2020, pages 1988-1998.",
"Jiaxi Tang and Ke Wang. 2018. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 565-573.",
"Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2024. Llmrec: Large language models with graph augmentation for recommendation. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pages 806-815.",
"Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, et al. 2023. A survey on large language models for recommendation. arXiv preprint arXiv:2305.19860.",
"Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji-Rong Wen. 2023a. Recommendation as instruction following: A large language model empowered recommendation approach. arXiv preprint arXiv:2305.07001.",
"Yang Zhang, Fuli Feng, Jizhi Zhang, Keqin Bao, Qifan Wang, and Xiangnan He. 2023b. Collm: Integrating collaborative embeddings into large language models for recommendation. arXiv preprint arXiv:2310.19488.",
"Zizhuo Zhang and Bang Wang. 2023. Prompt learning for news recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 227-237.",
"Bowen Zheng, Yupeng Hou, Hongyu Lu, Yu Chen, Wayne Xin Zhao, and Ji-Rong Wen. 2024a. Adapting large language models by integrating collaborative semantics for recommendation. In IEEE ICDE 2024.",
"Zhi Zheng, Wenshuo Chao, Zhaopeng Qiu, Hengshu Zhu, and Hui Xiong. 2024b. Harnessing large language models for text-rich sequential recommendation. In Proceedings of the ACM on Web Conference 2024, pages 3207-3216.",
"Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu, and Kun Gai. 2019. Deep interest evolution network for click-through rate prediction. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, pages 5941-5948. AAAI Press.",
"Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, and Jundong Li. 2024. Collaborative large language model for recommender systems. In Proceedings of the ACM on Web Conference 2024, pages 3162-3172."
],
"bbox": [510, 85, 882, 883],
"page_idx": 10
},
{
"type": "page_number",
"text": "9191",
"bbox": [480, 928, 517, 940],
"page_idx": 10
}
]
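The conclusion text above summarizes BinLLM's core idea: binarize pretrained collaborative embeddings so that collaborative information becomes a plain text string an LLM can read directly, and the page footnote notes that the dot separators of the compressed form are ignored. The snippet below is a minimal, illustrative sketch of that encoding idea only, not the authors' released implementation; the embedding source, dimensionality, dotted-decimal compression, and prompt template are assumptions made for demonstration.

```python
import numpy as np

# Hypothetical pretrained collaborative embeddings, e.g. from matrix
# factorization or LightGCN; values and dimensionality are made up here.
rng = np.random.default_rng(0)
user_emb = rng.normal(size=32)   # one user's collaborative embedding
item_emb = rng.normal(size=32)   # one item's collaborative embedding

def binarize(embedding: np.ndarray) -> str:
    """Map a real-valued embedding to a 0/1 string via the sign of each dimension."""
    bits = (embedding > 0).astype(int)
    return "".join(map(str, bits))

def to_dotted_decimal(bit_string: str) -> str:
    """Optionally compress the bit string into an IP-like dotted-decimal form
    (one number per 8 bits) to shorten the prompt."""
    octets = [str(int(bit_string[i:i + 8], 2)) for i in range(0, len(bit_string), 8)]
    return ".".join(octets)

user_code = to_dotted_decimal(binarize(user_emb))
item_code = to_dotted_decimal(binarize(item_emb))

# The codes are ordinary text, so they can be spliced into an LLM prompt
# without new vocabulary tokens or an extra projection layer.
prompt = (
    f"User ID feature: {user_code}. Target item ID feature: {item_code}. "
    "Will the user enjoy the target item? Answer Yes or No."
)
print(prompt)
```

Sign-based binarization keeps the encoding prompt-friendly: the resulting 0/1 (or dotted-decimal) string needs neither vocabulary expansion nor a trained mapping layer, which is the alignment advantage argued in the related-work discussion above.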