Chelsea707 committed
Commit 0065340 · verified · 1 Parent(s): 7e1002e

Add Batch e7d120a6-e8f4-4355-a733-b130f26c142f data

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +64 -0
  2. 2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/f97898a2-2851-402f-a099-0d9e8bee2d55_content_list.json +0 -0
  3. 2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/f97898a2-2851-402f-a099-0d9e8bee2d55_model.json +0 -0
  4. 2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/f97898a2-2851-402f-a099-0d9e8bee2d55_origin.pdf +3 -0
  5. 2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/full.md +0 -0
  6. 2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/images.zip +3 -0
  7. 2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/layout.json +0 -0
  8. 2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/03c637d4-7f7c-452b-825b-cd37087e895b_content_list.json +1729 -0
  9. 2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/03c637d4-7f7c-452b-825b-cd37087e895b_model.json +1916 -0
  10. 2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/03c637d4-7f7c-452b-825b-cd37087e895b_origin.pdf +3 -0
  11. 2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/full.md +312 -0
  12. 2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/images.zip +3 -0
  13. 2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/layout.json +0 -0
  14. 2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/cf0b9bfd-fca4-45d1-9d19-70a61385e078_content_list.json +0 -0
  15. 2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/cf0b9bfd-fca4-45d1-9d19-70a61385e078_model.json +0 -0
  16. 2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/cf0b9bfd-fca4-45d1-9d19-70a61385e078_origin.pdf +3 -0
  17. 2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/full.md +634 -0
  18. 2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/images.zip +3 -0
  19. 2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/layout.json +0 -0
  20. 2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/b14c9fdc-56df-4bf3-b892-7ce85722bb2a_content_list.json +0 -0
  21. 2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/b14c9fdc-56df-4bf3-b892-7ce85722bb2a_model.json +0 -0
  22. 2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/b14c9fdc-56df-4bf3-b892-7ce85722bb2a_origin.pdf +3 -0
  23. 2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/full.md +573 -0
  24. 2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/images.zip +3 -0
  25. 2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/layout.json +0 -0
  26. 2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/42bedb8b-dcb2-403e-921a-8f9f3747a4f4_content_list.json +0 -0
  27. 2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/42bedb8b-dcb2-403e-921a-8f9f3747a4f4_model.json +0 -0
  28. 2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/42bedb8b-dcb2-403e-921a-8f9f3747a4f4_origin.pdf +3 -0
  29. 2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/full.md +0 -0
  30. 2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/images.zip +3 -0
  31. 2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/layout.json +0 -0
  32. 2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/e5ae8e91-b92f-4f53-8664-78e078ceb4ef_content_list.json +0 -0
  33. 2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/e5ae8e91-b92f-4f53-8664-78e078ceb4ef_model.json +0 -0
  34. 2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/e5ae8e91-b92f-4f53-8664-78e078ceb4ef_origin.pdf +3 -0
  35. 2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/full.md +482 -0
  36. 2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/images.zip +3 -0
  37. 2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/layout.json +0 -0
  38. 2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/80295725-d200-4ec3-8d76-7e55ad235eea_content_list.json +0 -0
  39. 2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/80295725-d200-4ec3-8d76-7e55ad235eea_model.json +0 -0
  40. 2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/80295725-d200-4ec3-8d76-7e55ad235eea_origin.pdf +3 -0
  41. 2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/full.md +710 -0
  42. 2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/images.zip +3 -0
  43. 2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/layout.json +0 -0
  44. 2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/729add8e-e0d4-485c-88d2-ddbf2ab27215_content_list.json +0 -0
  45. 2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/729add8e-e0d4-485c-88d2-ddbf2ab27215_model.json +0 -0
  46. 2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/729add8e-e0d4-485c-88d2-ddbf2ab27215_origin.pdf +3 -0
  47. 2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/full.md +0 -0
  48. 2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/images.zip +3 -0
  49. 2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/layout.json +0 -0
  50. 2025/A Multi-persona Framework for Argument Quality Assessment/2481c86c-70af-47d5-be02-6418fbf3c386_content_list.json +0 -0
.gitattributes CHANGED
@@ -1397,3 +1397,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
1397
  2025/What[[:space:]]Language[[:space:]]Do[[:space:]]Non-English-Centric[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Think[[:space:]]in_/6eaf920e-b203-49ea-a9d6-84d6cd602848_origin.pdf filter=lfs diff=lfs merge=lfs -text
1398
  2025/What[[:space:]]is[[:space:]]in[[:space:]]a[[:space:]]name_[[:space:]]Mitigating[[:space:]]Name[[:space:]]Bias[[:space:]]in[[:space:]]Text[[:space:]]Embedding[[:space:]]Similarity[[:space:]]via[[:space:]]Anonymization/035414c8-edfd-4fbb-9cfd-6719b3ea73d2_origin.pdf filter=lfs diff=lfs merge=lfs -text
1399
  2025/When[[:space:]]Benchmarks[[:space:]]Talk_[[:space:]]Re-Evaluating[[:space:]]Code[[:space:]]LLMs[[:space:]]with[[:space:]]Interactive[[:space:]]Feedback/e3568129-d43e-4d1c-ba4b-74aeef1bc257_origin.pdf filter=lfs diff=lfs merge=lfs -text
1400
+ 2025/(RSA)²_[[:space:]]A[[:space:]]Rhetorical-Strategy-Aware[[:space:]]Rational[[:space:]]Speech[[:space:]]Act[[:space:]]Framework[[:space:]]for[[:space:]]Figurative[[:space:]]Language[[:space:]]Understanding/f97898a2-2851-402f-a099-0d9e8bee2d55_origin.pdf filter=lfs diff=lfs merge=lfs -text
1401
+ 2025/500xCompressor_[[:space:]]Generalized[[:space:]]Prompt[[:space:]]Compression[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models/03c637d4-7f7c-452b-825b-cd37087e895b_origin.pdf filter=lfs diff=lfs merge=lfs -text
1402
+ 2025/A[[:space:]]Drop-In[[:space:]]Solution[[:space:]]for[[:space:]]On-the-Fly[[:space:]]Adaptation[[:space:]]of[[:space:]]Speculative[[:space:]]Decoding[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/cf0b9bfd-fca4-45d1-9d19-70a61385e078_origin.pdf filter=lfs diff=lfs merge=lfs -text
1403
+ 2025/A[[:space:]]Dual-Mind[[:space:]]Framework[[:space:]]for[[:space:]]Strategic[[:space:]]and[[:space:]]Expressive[[:space:]]Negotiation[[:space:]]Agent/b14c9fdc-56df-4bf3-b892-7ce85722bb2a_origin.pdf filter=lfs diff=lfs merge=lfs -text
1404
+ 2025/A[[:space:]]Dual-Perspective[[:space:]]NLG[[:space:]]Meta-Evaluation[[:space:]]Framework[[:space:]]with[[:space:]]Automatic[[:space:]]Benchmark[[:space:]]and[[:space:]]Better[[:space:]]Interpretability/42bedb8b-dcb2-403e-921a-8f9f3747a4f4_origin.pdf filter=lfs diff=lfs merge=lfs -text
1405
+ 2025/A[[:space:]]Generative[[:space:]]Adaptive[[:space:]]Replay[[:space:]]Continual[[:space:]]Learning[[:space:]]Model[[:space:]]for[[:space:]]Temporal[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Reasoning/e5ae8e91-b92f-4f53-8664-78e078ceb4ef_origin.pdf filter=lfs diff=lfs merge=lfs -text
1406
+ 2025/A[[:space:]]Modular[[:space:]]Approach[[:space:]]for[[:space:]]Clinical[[:space:]]SLMs[[:space:]]Driven[[:space:]]by[[:space:]]Synthetic[[:space:]]Data[[:space:]]with[[:space:]]Pre-Instruction[[:space:]]Tuning,[[:space:]]Model[[:space:]]Merging,[[:space:]]and[[:space:]]Clinical-Tasks[[:space:]]Alignment/80295725-d200-4ec3-8d76-7e55ad235eea_origin.pdf filter=lfs diff=lfs merge=lfs -text
1407
+ 2025/A[[:space:]]Multi-Agent[[:space:]]Framework[[:space:]]for[[:space:]]Mitigating[[:space:]]Dialect[[:space:]]Biases[[:space:]]in[[:space:]]Privacy[[:space:]]Policy[[:space:]]Question-Answering[[:space:]]Systems/729add8e-e0d4-485c-88d2-ddbf2ab27215_origin.pdf filter=lfs diff=lfs merge=lfs -text
1408
+ 2025/A[[:space:]]Multi-persona[[:space:]]Framework[[:space:]]for[[:space:]]Argument[[:space:]]Quality[[:space:]]Assessment/2481c86c-70af-47d5-be02-6418fbf3c386_origin.pdf filter=lfs diff=lfs merge=lfs -text
1409
+ 2025/A[[:space:]]Mutual[[:space:]]Information[[:space:]]Perspective[[:space:]]on[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Embedding/612dfd43-1891-42e4-8e4d-f33d6da03cd5_origin.pdf filter=lfs diff=lfs merge=lfs -text
1410
+ 2025/A[[:space:]]New[[:space:]]Formulation[[:space:]]of[[:space:]]Zipf’s[[:space:]]Meaning-Frequency[[:space:]]Law[[:space:]]through[[:space:]]Contextual[[:space:]]Diversity/0bcbc52a-4518-4b55-a2a3-23170e8a79d0_origin.pdf filter=lfs diff=lfs merge=lfs -text
1411
+ 2025/A[[:space:]]Parameter-Efficient[[:space:]]and[[:space:]]Fine-Grained[[:space:]]Prompt[[:space:]]Learning[[:space:]]for[[:space:]]Vision-Language[[:space:]]Models/93a32327-6764-4a6a-beee-5cd7905b9041_origin.pdf filter=lfs diff=lfs merge=lfs -text
1412
+ 2025/A[[:space:]]Reality[[:space:]]Check[[:space:]]on[[:space:]]Context[[:space:]]Utilisation[[:space:]]for[[:space:]]Retrieval-Augmented[[:space:]]Generation/35e7145c-532d-4725-a09a-595182a41aba_origin.pdf filter=lfs diff=lfs merge=lfs -text
1413
+ 2025/A[[:space:]]Self-Denoising[[:space:]]Model[[:space:]]for[[:space:]]Robust[[:space:]]Few-Shot[[:space:]]Relation[[:space:]]Extraction/64725fbb-e947-4178-8e6a-10db5785bf00_origin.pdf filter=lfs diff=lfs merge=lfs -text
1414
+ 2025/A[[:space:]]Silver[[:space:]]Bullet[[:space:]]or[[:space:]]a[[:space:]]Compromise[[:space:]]for[[:space:]]Full[[:space:]]Attention_[[:space:]]A[[:space:]]Comprehensive[[:space:]]Study[[:space:]]of[[:space:]]Gist[[:space:]]Token-based[[:space:]]Context[[:space:]]Compression/1890f6c0-402a-41e3-97f8-31c20a7c5f6d_origin.pdf filter=lfs diff=lfs merge=lfs -text
1415
+ 2025/A[[:space:]]Spatio-Temporal[[:space:]]Point[[:space:]]Process[[:space:]]for[[:space:]]Fine-Grained[[:space:]]Modeling[[:space:]]of[[:space:]]Reading[[:space:]]Behavior/39d52ae8-2bd3-4fb1-973d-76c211b9989a_origin.pdf filter=lfs diff=lfs merge=lfs -text
1416
+ 2025/A[[:space:]]Statistical[[:space:]]and[[:space:]]Multi-Perspective[[:space:]]Revisiting[[:space:]]of[[:space:]]the[[:space:]]Membership[[:space:]]Inference[[:space:]]Attack[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/5215e8ba-949c-43e5-94a5-f152b5b01b8d_origin.pdf filter=lfs diff=lfs merge=lfs -text
1417
+ 2025/A[[:space:]]Strategic[[:space:]]Coordination[[:space:]]Framework[[:space:]]of[[:space:]]Small[[:space:]]LMs[[:space:]]Matches[[:space:]]Large[[:space:]]LMs[[:space:]]in[[:space:]]Data[[:space:]]Synthesis/4ee8bd4a-1f13-4007-9e61-a4dd4f6466fb_origin.pdf filter=lfs diff=lfs merge=lfs -text
1418
+ 2025/A[[:space:]]Survey[[:space:]]of[[:space:]]Post-Training[[:space:]]Scaling[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/93a04e31-898f-4145-8417-e1d9d558c443_origin.pdf filter=lfs diff=lfs merge=lfs -text
1419
+ 2025/A[[:space:]]Survey[[:space:]]on[[:space:]]Efficient[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Training_[[:space:]]From[[:space:]]Data-centric[[:space:]]Perspectives/656e18bc-0a9f-4cd9-b217-8b74e799ee17_origin.pdf filter=lfs diff=lfs merge=lfs -text
1420
+ 2025/When[[:space:]]Claims[[:space:]]Evolve_[[:space:]]Evaluating[[:space:]]and[[:space:]]Enhancing[[:space:]]the[[:space:]]Robustness[[:space:]]of[[:space:]]Embedding[[:space:]]Models[[:space:]]Against[[:space:]]Misinformation[[:space:]]Edits/589de4e0-f735-4055-adc3-477d4e1cec77_origin.pdf filter=lfs diff=lfs merge=lfs -text
1421
+ 2025/When[[:space:]]Detection[[:space:]]Fails_[[:space:]]The[[:space:]]Power[[:space:]]of[[:space:]]Fine-Tuned[[:space:]]Models[[:space:]]to[[:space:]]Generate[[:space:]]Human-Like[[:space:]]Social[[:space:]]Media[[:space:]]Text/62f50702-d4fd-4127-99bc-9fe033ef5c3f_origin.pdf filter=lfs diff=lfs merge=lfs -text
1422
+ 2025/When[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Meet[[:space:]]Speech_[[:space:]]A[[:space:]]Survey[[:space:]]on[[:space:]]Integration[[:space:]]Approaches/1f83b588-09ba-4ae8-acde-d7730cd6a377_origin.pdf filter=lfs diff=lfs merge=lfs -text
1423
+ 2025/When[[:space:]]Should[[:space:]]Dense[[:space:]]Retrievers[[:space:]]Be[[:space:]]Updated[[:space:]]in[[:space:]]Evolving[[:space:]]Corpora_[[:space:]]Detecting[[:space:]]Out-of-Distribution[[:space:]]Corpora[[:space:]]Using[[:space:]]GradNormIR/264c5cc2-5e57-46d9-ac99-957760894a10_origin.pdf filter=lfs diff=lfs merge=lfs -text
1424
+ 2025/Whether[[:space:]]LLMs[[:space:]]Know[[:space:]]If[[:space:]]They[[:space:]]Know_[[:space:]]Identifying[[:space:]]Knowledge[[:space:]]Boundaries[[:space:]]via[[:space:]]Debiased[[:space:]]Historical[[:space:]]In-Context[[:space:]]Learning/5ea11962-43cb-48a9-b58d-3830a9197762_origin.pdf filter=lfs diff=lfs merge=lfs -text
1425
+ 2025/Which[[:space:]]Retain[[:space:]]Set[[:space:]]Matters[[:space:]]for[[:space:]]LLM[[:space:]]Unlearning_[[:space:]]A[[:space:]]Case[[:space:]]Study[[:space:]]on[[:space:]]Entity[[:space:]]Unlearning/dc45fa0c-8cbc-42ed-a990-04b1541f3ad4_origin.pdf filter=lfs diff=lfs merge=lfs -text
1426
+ 2025/Who[[:space:]]Can[[:space:]]Withstand[[:space:]]Chat-Audio[[:space:]]Attacks_[[:space:]]An[[:space:]]Evaluation[[:space:]]Benchmark[[:space:]]for[[:space:]]Large[[:space:]]Audio-Language[[:space:]]Models/1eb1070c-4c29-48c4-b963-61128c320dd3_origin.pdf filter=lfs diff=lfs merge=lfs -text
1427
+ 2025/Who[[:space:]]Taught[[:space:]]You[[:space:]]That_[[:space:]]Tracing[[:space:]]Teachers[[:space:]]in[[:space:]]Model[[:space:]]Distillation/6cc4033d-e657-4aea-80ca-5257203d706c_origin.pdf filter=lfs diff=lfs merge=lfs -text
1428
+ 2025/Why[[:space:]]Are[[:space:]]Positional[[:space:]]Encodings[[:space:]]Nonessential[[:space:]]for[[:space:]]Deep[[:space:]]Autoregressive[[:space:]]Transformers_[[:space:]]A[[:space:]]Petroglyph[[:space:]]Revisited/9bcc2f38-6b87-4223-bbd9-e466028dbbef_origin.pdf filter=lfs diff=lfs merge=lfs -text
1429
+ 2025/Why[[:space:]]Multi-Interest[[:space:]]Fairness[[:space:]]Matters_[[:space:]]Hypergraph[[:space:]]Contrastive[[:space:]]Multi-Interest[[:space:]]Learning[[:space:]]for[[:space:]]Fair[[:space:]]Conversational[[:space:]]Recommender[[:space:]]System/ba04020d-a1ea-4d38-823e-d1ade1adf60b_origin.pdf filter=lfs diff=lfs merge=lfs -text
1430
+ 2025/Why[[:space:]]Not[[:space:]]Act[[:space:]]on[[:space:]]What[[:space:]]You[[:space:]]Know_[[:space:]]Unleashing[[:space:]]Safety[[:space:]]Potential[[:space:]]of[[:space:]]LLMs[[:space:]]via[[:space:]]Self-Aware[[:space:]]Guard[[:space:]]Enhancement/f319b59d-7664-4c56-bd47-fdd2d74250d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
1431
+ 2025/Why[[:space:]]Uncertainty[[:space:]]Estimation[[:space:]]Methods[[:space:]]Fall[[:space:]]Short[[:space:]]in[[:space:]]RAG_[[:space:]]An[[:space:]]Axiomatic[[:space:]]Analysis/edda7477-e37f-4b2f-a62f-268643b6af4a_origin.pdf filter=lfs diff=lfs merge=lfs -text
1432
+ 2025/Why[[:space:]]Vision[[:space:]]Language[[:space:]]Models[[:space:]]Struggle[[:space:]]with[[:space:]]Visual[[:space:]]Arithmetic_[[:space:]]Towards[[:space:]]Enhanced[[:space:]]Chart[[:space:]]and[[:space:]]Geometry[[:space:]]Understanding/3b8fe9b7-b59e-4c0c-9905-2492e3da44d2_origin.pdf filter=lfs diff=lfs merge=lfs -text
1433
+ 2025/WikiMixQA_[[:space:]]A[[:space:]]Multimodal[[:space:]]Benchmark[[:space:]]for[[:space:]]Question[[:space:]]Answering[[:space:]]over[[:space:]]Tables[[:space:]]and[[:space:]]Charts/01a24fc7-0523-4bab-ab05-40d52d08da67_origin.pdf filter=lfs diff=lfs merge=lfs -text
1434
+ 2025/WirelessMathBench_[[:space:]]A[[:space:]]Mathematical[[:space:]]Modeling[[:space:]]Benchmark[[:space:]]for[[:space:]]LLMs[[:space:]]in[[:space:]]Wireless[[:space:]]Communications/dae760f1-9a5e-4d0b-bddf-b160ecf7113f_origin.pdf filter=lfs diff=lfs merge=lfs -text
1435
+ 2025/Word[[:space:]]Form[[:space:]]Matters_[[:space:]]LLMs’[[:space:]]Semantic[[:space:]]Reconstruction[[:space:]]under[[:space:]]Typoglycemia/54c9d1f5-316c-4708-9767-42c97f6bcb33_origin.pdf filter=lfs diff=lfs merge=lfs -text
1436
+ 2025/Word-Level[[:space:]]Detection[[:space:]]of[[:space:]]Code-Mixed[[:space:]]Hate[[:space:]]Speech[[:space:]]with[[:space:]]Multilingual[[:space:]]Domain[[:space:]]Transfer/3d838265-2b2e-407c-b326-4b60e1dacee0_origin.pdf filter=lfs diff=lfs merge=lfs -text
1437
+ 2025/Word2Passage_[[:space:]]Word-level[[:space:]]Importance[[:space:]]Re-weighting[[:space:]]for[[:space:]]Query[[:space:]]Expansion/99aad441-dee8-4b56-bd66-956b9fe80ad8_origin.pdf filter=lfs diff=lfs merge=lfs -text
1438
+ 2025/World[[:space:]]Knowledge[[:space:]]Resolves[[:space:]]Some[[:space:]]Aspectual[[:space:]]Ambiguity/88c07db8-0b26-4f24-91ac-9be6172b22d2_origin.pdf filter=lfs diff=lfs merge=lfs -text
1439
+ 2025/Worse[[:space:]]than[[:space:]]Random_[[:space:]]An[[:space:]]Embarrassingly[[:space:]]Simple[[:space:]]Probing[[:space:]]Evaluation[[:space:]]of[[:space:]]Large[[:space:]]Multimodal[[:space:]]Models[[:space:]]in[[:space:]]Medical[[:space:]]VQA/77b92a8d-3d72-48ee-b1ec-a3edf83d0dc6_origin.pdf filter=lfs diff=lfs merge=lfs -text
1440
+ 2025/X-WebAgentBench_[[:space:]]A[[:space:]]Multilingual[[:space:]]Interactive[[:space:]]Web[[:space:]]Benchmark[[:space:]]for[[:space:]]Evaluating[[:space:]]Global[[:space:]]Agentic[[:space:]]System/153c561c-1a4f-480f-9613-7fe86f4edb47_origin.pdf filter=lfs diff=lfs merge=lfs -text
1441
+ 2025/XFinBench_[[:space:]]Benchmarking[[:space:]]LLMs[[:space:]]in[[:space:]]Complex[[:space:]]Financial[[:space:]]Problem[[:space:]]Solving[[:space:]]and[[:space:]]Reasoning/612d8759-eeb8-4607-aa2f-d82a58a9d290_origin.pdf filter=lfs diff=lfs merge=lfs -text
1442
+ 2025/YinYang-Align_[[:space:]]A[[:space:]]new[[:space:]]Benchmark[[:space:]]for[[:space:]]Competing[[:space:]]Objectives[[:space:]]and[[:space:]]Introducing[[:space:]]Multi-Objective[[:space:]]Preference[[:space:]]based[[:space:]]Text-to-Image[[:space:]]Alignment/b40e5da8-8188-4d4f-8956-73339e108537_origin.pdf filter=lfs diff=lfs merge=lfs -text
1443
+ 2025/You[[:space:]]need[[:space:]]to[[:space:]]MIMIC[[:space:]]to[[:space:]]get[[:space:]]FAME_[[:space:]]Solving[[:space:]]Meeting[[:space:]]Transcript[[:space:]]Scarcity[[:space:]]with[[:space:]]Multi-Agent[[:space:]]Conversations/4d89e9cc-c2b0-4852-9003-93274ae27533_origin.pdf filter=lfs diff=lfs merge=lfs -text
1444
+ 2025/Your[[:space:]]Language[[:space:]]Model[[:space:]]May[[:space:]]Think[[:space:]]Too[[:space:]]Rigidly_[[:space:]]Achieving[[:space:]]Reasoning[[:space:]]Consistency[[:space:]]with[[:space:]]Symmetry-Enhanced[[:space:]]Training/f15e1f45-65cb-483d-a727-0e7d07698c56_origin.pdf filter=lfs diff=lfs merge=lfs -text
1445
+ 2025/Zero-Shot[[:space:]]Conversational[[:space:]]Stance[[:space:]]Detection_[[:space:]]Dataset[[:space:]]and[[:space:]]Approaches/a9df341d-cfdb-4bbe-a575-16a926ee9227_origin.pdf filter=lfs diff=lfs merge=lfs -text
1446
+ 2025/ZeroDL_[[:space:]]Zero-shot[[:space:]]Distribution[[:space:]]Learning[[:space:]]for[[:space:]]Text[[:space:]]Clustering[[:space:]]via[[:space:]]Large[[:space:]]Language[[:space:]]Models/0e3a889e-8ec2-403a-8f54-91ee28e7b6c2_origin.pdf filter=lfs diff=lfs merge=lfs -text
1447
+ 2025/ZeroNER_[[:space:]]Fueling[[:space:]]Zero-Shot[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition[[:space:]]via[[:space:]]Entity[[:space:]]Type[[:space:]]Descriptions/95064bec-0126-4b2d-aa15-4202320509c9_origin.pdf filter=lfs diff=lfs merge=lfs -text
1448
+ 2025/daDPO_[[:space:]]Distribution-Aware[[:space:]]DPO[[:space:]]for[[:space:]]Distilling[[:space:]]Conversational[[:space:]]Abilities/a50442e9-c6bd-4ea8-a916-614849ef3c84_origin.pdf filter=lfs diff=lfs merge=lfs -text
1449
+ 2025/gMBA_[[:space:]]Expression[[:space:]]Semantic[[:space:]]Guided[[:space:]]Mixed[[:space:]]Boolean-Arithmetic[[:space:]]Deobfuscation[[:space:]]Using[[:space:]]Transformer[[:space:]]Architectures/8e47b93f-11cd-4e53-a464-013349fdee05_origin.pdf filter=lfs diff=lfs merge=lfs -text
1450
+ 2025/iAgent_[[:space:]]LLM[[:space:]]Agent[[:space:]]as[[:space:]]a[[:space:]]Shield[[:space:]]between[[:space:]]User[[:space:]]and[[:space:]]Recommender[[:space:]]Systems/a49f3f34-8a9e-4b16-b794-1f69c0526774_origin.pdf filter=lfs diff=lfs merge=lfs -text
1451
+ 2025/iMOVE[[:space:]]_[[:space:]]Instance-Motion-Aware[[:space:]]Video[[:space:]]Understanding/02a545b5-0571-4f10-9e48-e54c99f53fce_origin.pdf filter=lfs diff=lfs merge=lfs -text
1452
+ 2025/mOSCAR_[[:space:]]A[[:space:]]Large-scale[[:space:]]Multilingual[[:space:]]and[[:space:]]Multimodal[[:space:]]Document-level[[:space:]]Corpus/c8098ead-4e4e-4718-99ba-fc1455df6446_origin.pdf filter=lfs diff=lfs merge=lfs -text
1453
+ 2025/mRAKL_[[:space:]]Multilingual[[:space:]]Retrieval-Augmented[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Construction[[:space:]]for[[:space:]]Low-Resourced[[:space:]]Languages/722cb7a6-9ad8-4812-be94-a001edc4bb2b_origin.pdf filter=lfs diff=lfs merge=lfs -text
1454
+ 2025/mStyleDistance_[[:space:]]Multilingual[[:space:]]Style[[:space:]]Embeddings[[:space:]]and[[:space:]]their[[:space:]]Evaluation/cc27d87b-0e0a-46da-ae7e-0d2e4d938bb3_origin.pdf filter=lfs diff=lfs merge=lfs -text
1455
+ 2025/mmE5_[[:space:]]Improving[[:space:]]Multimodal[[:space:]]Multilingual[[:space:]]Embeddings[[:space:]]via[[:space:]]High-quality[[:space:]]Synthetic[[:space:]]Data/8857e98b-795a-4cd9-9360-f86e106decd8_origin.pdf filter=lfs diff=lfs merge=lfs -text
1456
+ 2025/scRAG_[[:space:]]Hybrid[[:space:]]Retrieval-Augmented[[:space:]]Generation[[:space:]]for[[:space:]]LLM-based[[:space:]]Cross-Tissue[[:space:]]Single-Cell[[:space:]]Annotation/fafe992a-7b6b-4330-a880-ec05f1d2ed2f_origin.pdf filter=lfs diff=lfs merge=lfs -text
1457
+ 2025/skLEP_[[:space:]]A[[:space:]]Slovak[[:space:]]General[[:space:]]Language[[:space:]]Understanding[[:space:]]Benchmark/4ae617a8-cccc-49b8-93bc-8f6cdeca4617_origin.pdf filter=lfs diff=lfs merge=lfs -text
1458
+ 2025/taz2024full_[[:space:]]Analysing[[:space:]]German[[:space:]]Newspapers[[:space:]]for[[:space:]]Gender[[:space:]]Bias[[:space:]]and[[:space:]]Discrimination[[:space:]]across[[:space:]]Decades/14dbc099-9840-4209-ac84-6677bdd41360_origin.pdf filter=lfs diff=lfs merge=lfs -text
1459
+ 2025/‘No’[[:space:]]Matters_[[:space:]]Out-of-Distribution[[:space:]]Detection[[:space:]]in[[:space:]]Multimodality[[:space:]]Multi-Turn[[:space:]]Interactive[[:space:]]Dialogue[[:space:]]Download[[:space:]]PDF/e6d9d3c8-4c5c-4683-ab40-ebf5de0e2029_origin.pdf filter=lfs diff=lfs merge=lfs -text
1460
+ 2025/“I[[:space:]]understand[[:space:]]your[[:space:]]perspective”_[[:space:]]LLM[[:space:]]Persuasion[[:space:]]through[[:space:]]the[[:space:]]Lens[[:space:]]of[[:space:]]Communicative[[:space:]]Action[[:space:]]Theory/9f594e09-33c3-451a-8f39-1009ca6a1ede_origin.pdf filter=lfs diff=lfs merge=lfs -text
1461
+ 2025/“My[[:space:]]life[[:space:]]is[[:space:]]miserable,[[:space:]]have[[:space:]]to[[:space:]]sign[[:space:]]500[[:space:]]autographs[[:space:]]everyday”_[[:space:]]Exposing[[:space:]]Humblebragging,[[:space:]]the[[:space:]]Brags[[:space:]]in[[:space:]]Disguise/cfda795b-31b2-4211-bd66-e102d49b7189_origin.pdf filter=lfs diff=lfs merge=lfs -text
1462
+ 2025/“Well,[[:space:]]Keep[[:space:]]Thinking”_[[:space:]]Enhancing[[:space:]]LLM[[:space:]]Reasoning[[:space:]]with[[:space:]]Adaptive[[:space:]]Injection[[:space:]]Decoding/f157b6aa-8d21-42f5-9929-129a01b848ee_origin.pdf filter=lfs diff=lfs merge=lfs -text
1463
+ 2025/“You[[:space:]]are[[:space:]]Beautiful,[[:space:]]Body[[:space:]]Image[[:space:]]Stereotypes[[:space:]]are[[:space:]]Ugly!”[[:space:]]BIStereo_[[:space:]]A[[:space:]]Benchmark[[:space:]]to[[:space:]]Measure[[:space:]]Body[[:space:]]Image[[:space:]]Stereotypes[[:space:]]in[[:space:]]Language[[:space:]]Models/2d29a461-4c69-4b85-b3cc-3cf77845eb0b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/f97898a2-2851-402f-a099-0d9e8bee2d55_content_list.json ADDED
The diff for this file is too large to render.
 
2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/f97898a2-2851-402f-a099-0d9e8bee2d55_model.json ADDED
The diff for this file is too large to render.
 
2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/f97898a2-2851-402f-a099-0d9e8bee2d55_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:11679bf64f2f015b45b0f42fcf517285d02d4f56971eeac59e5b8c56e4939f46
3
+ size 15363673
2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/full.md ADDED
The diff for this file is too large to render.
 
2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e6235077255764bafe57b55b9708b75f0e6cfff1259a7cc6f4f19784865af0e
3
+ size 2867922
2025/(RSA)²_ A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding/layout.json ADDED
The diff for this file is too large to render.
 
2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/03c637d4-7f7c-452b-825b-cd37087e895b_content_list.json ADDED
@@ -0,0 +1,1729 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "500xCompressor: Generalized Prompt Compression for Large Language Models",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 226,
8
+ 90,
9
+ 771,
10
+ 130
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Zongqian Li \nUniversity of Cambridge \nz1510@cam.ac.uk",
17
+ "bbox": [
18
+ 146,
19
+ 158,
20
+ 349,
21
+ 206
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Yixuan Su \nUniversity of Cambridge \nys484@cam.ac.uk",
28
+ "bbox": [
29
+ 396,
30
+ 158,
31
+ 601,
32
+ 206
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Nigel Collier \nUniversity of Cambridge \nnhc30@cam.ac.uk",
39
+ "bbox": [
40
+ 648,
41
+ 158,
42
+ 852,
43
+ 206
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Abstract",
50
+ "text_level": 1,
51
+ "bbox": [
52
+ 260,
53
+ 260,
54
+ 339,
55
+ 275
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "Prompt compression is important for large language models (LLMs) to increase inference speed, reduce costs, and improve user experience. However, current methods face challenges such as low compression ratios and potential training-test overlap during evaluation. To address these issues, we propose 500xCompressor, a method that compresses natural language contexts into a minimum of one special token and demonstrates strong generalization ability. The 500xCompressor introduces approximately $0.3\\%$ additional parameters and achieves compression ratios ranging from 6x to 500x, achieving $27 - 90\\%$ reduction in calculations and $55 - 83\\%$ memory savings when generating 100-400 tokens for new and reused prompts at 500x compression, while retaining $70 - 74\\%$ (F1) and $77 - 84\\%$ (Exact Match) of the LLM capabilities compared to using non-compressed prompts. It is designed to compress any text, answer various types of questions, and can be utilized by the original LLM without requiring fine-tuning. Initially, 500xCompressor was pretrained on the ArxivCorpus, followed by fine-tuning on the ArxivQA dataset, and subsequently evaluated on strictly unseen and cross-domain question answering (QA) datasets. This study shows that KV values outperform embeddings in preserving information at high compression ratios. The highly compressive nature of natural language prompts, even for detailed information, suggests potential for future applications and the development of a new LLM language.",
62
+ "bbox": [
63
+ 141,
64
+ 287,
65
+ 460,
66
+ 772
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "1 Introduction",
73
+ "text_level": 1,
74
+ "bbox": [
75
+ 114,
76
+ 793,
77
+ 258,
78
+ 808
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "Long prompts present several challenges in natural language processing applications, such as decreased inference speed, higher computation cost, and a negative influence on user experience. Additionally, the context length limit restricts model",
85
+ "bbox": [
86
+ 112,
87
+ 818,
88
+ 489,
89
+ 898
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "<https://github.com/ZongqianLi/500xCompressor>",
96
+ "bbox": [
97
+ 134,
98
+ 906,
99
+ 477,
100
+ 920
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "image",
106
+ "img_path": "images/c88f2f2ba27aaf18daa21dc5d042bfd8cf899928b2fb6ae02a70f13753832852.jpg",
107
+ "image_caption": [
108
+ "Figure 1: The original text is compressed by $500\\mathrm{x}$ Compressor and utilized for downstream tasks."
109
+ ],
110
+ "image_footnote": [],
111
+ "bbox": [
112
+ 524,
113
+ 258,
114
+ 863,
115
+ 602
116
+ ],
117
+ "page_idx": 0
118
+ },
119
+ {
120
+ "type": "text",
121
+ "text": "performance and application scenarios, creating a strong demand for prompt length reduction.",
122
+ "bbox": [
123
+ 507,
124
+ 676,
125
+ 882,
126
+ 708
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "text",
132
+ "text": "Two primary methods for prompt compression have been proposed: hard prompt and soft prompt. Hard prompt methods, such as SelectiveSentence (Li et al., 2023) and LLMLingua (Jiang et al., 2023a), eliminate low-information sentences, words, or even tokens. On the other hand, soft prompt methods, including GIST (Mu et al., 2024), AutoCompressor (Chevalier et al., 2023), and ICAE (Ge et al., 2024), compress natural language tokens into a small number of special tokens. However, these methods have problems such as low compression ratios (low efficiency improvement), unclear information loss, and potential",
133
+ "bbox": [
134
+ 507,
135
+ 712,
136
+ 884,
137
+ 921
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "page_number",
143
+ "text": "25081",
144
+ "bbox": [
145
+ 473,
146
+ 927,
147
+ 522,
148
+ 940
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "footer",
154
+ "text": "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 25081-25091 July 27 - August 1, 2025 ©2025 Association for Computational Linguistics",
155
+ "bbox": [
156
+ 82,
157
+ 945,
158
+ 912,
159
+ 973
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "image",
165
+ "img_path": "images/9843984ec7c636a0bb3ae0a5acda0d51c44f6d6be5b60801de05bb8210dc47ff.jpg",
166
+ "image_caption": [
167
+ "Figure 2: Process of pretraining (left), fine-tuning (middle), and prediction (right) with $500\\mathrm{x}$ Compressor."
168
+ ],
169
+ "image_footnote": [],
170
+ "bbox": [
171
+ 124,
172
+ 80,
173
+ 872,
174
+ 247
175
+ ],
176
+ "page_idx": 1
177
+ },
178
+ {
179
+ "type": "text",
180
+ "text": "training-test overlap during evaluation, as discussed in detail in Section 6. For instance, ICAE achieves compression ratios no higher than $15\\mathrm{x}$ , and the win rate evaluation metric cannot quantitatively measure the extent of information loss during compression. Additionally, evaluation texts sourced from the Wikipedia dataset might overlap with the training data for LLaMa series models (Grattafori et al., 2024), raising questions that the generated content could be retrieved from the memory of the LLM rather than the compressed prompts.",
181
+ "bbox": [
182
+ 112,
183
+ 298,
184
+ 489,
185
+ 491
186
+ ],
187
+ "page_idx": 1
188
+ },
189
+ {
190
+ "type": "text",
191
+ "text": "To solve these problems, we propose 500xCompressor, illustrated in Figure 1. This method compresses prompts of approximately 500 tokens into a minimum of one token, allowing the compressed tokens to regenerate the original texts or be used for QA. Although trained on the ArxivCorpus and ArxivQA dataset, 500xCompressor could generalize to answer other types of questions. Analysis demonstrates that detailed information, such as proper nouns, special names, and numbers, could be accurately compressed and retrieved.",
192
+ "bbox": [
193
+ 112,
194
+ 497,
195
+ 489,
196
+ 674
197
+ ],
198
+ "page_idx": 1
199
+ },
200
+ {
201
+ "type": "text",
202
+ "text": "500xCompressor retains the advantages of previous methods and introduces several additional characteristics. Similar to previous soft prompt methods, 500xCompressor is generalized and nonselective, capable of compressing unseen texts across various topics for QA, demonstrating its generalization ability. Unlike selective compression methods, 500xCompressor aims to regenerate the entire original text, ensuring that all tokens in the original text contribute to the compression tokens. Moreover, the compressed prompts could be used to regenerate original texts or for QA without requiring fine-tuning of the LLM, preserving the LLM's original capabilities and improving the convenience of using compressed tokens.",
203
+ "bbox": [
204
+ 112,
205
+ 680,
206
+ 489,
207
+ 921
208
+ ],
209
+ "page_idx": 1
210
+ },
211
+ {
212
+ "type": "text",
213
+ "text": "In addition to these existing advantages, we provide contributions in three main areas:",
214
+ "bbox": [
215
+ 507,
216
+ 299,
217
+ 884,
218
+ 330
219
+ ],
220
+ "page_idx": 1
221
+ },
222
+ {
223
+ "type": "list",
224
+ "sub_type": "text",
225
+ "list_items": [
226
+ "- High Compression Ratio: This study evaluates the compression model with one and sixteen tokens to compress up to 500 tokens, achieving compression ratios up to $500\\mathrm{x}$ . These ratios significantly outperform previous studies, which reported ratios of less than $50\\mathrm{x}$ , fully testing the upper limit of prompt compression.",
227
+ "- Strict Unseen Evaluation: Using Arxiv abstracts published post-January 2024 ensures evaluation on content unseen by both the LLM and compression model, verifying that outputs are from compressed prompts rather than pre-existing model knowledge.",
228
+ "- Quantitative Analysis of Information Loss: Through extractive QA with context-span answers, we realize direct quantitative comparison between compressed and uncompressed performance, providing precise measurements of compression-resulting information loss."
229
+ ],
230
+ "bbox": [
231
+ 507,
232
+ 338,
233
+ 884,
234
+ 645
235
+ ],
236
+ "page_idx": 1
237
+ },
238
+ {
239
+ "type": "text",
240
+ "text": "In this paper, the design of $500\\mathrm{x}$ Compressor is first introduced in Section 2, including how to train and use the compression model. After that, Section 3 explains the training and evaluation datasets, the baselines, and the evaluation metrics. The evaluation results for regeneration and QA are presented in Section 4, with ablation studies analyzing the variables influencing the compression models. This is followed by discussions in Section 5, and finally, the sections on related work and conclusions.",
241
+ "bbox": [
242
+ 507,
243
+ 655,
244
+ 884,
245
+ 815
246
+ ],
247
+ "page_idx": 1
248
+ },
249
+ {
250
+ "type": "text",
251
+ "text": "2 Methods",
252
+ "text_level": 1,
253
+ "bbox": [
254
+ 507,
255
+ 827,
256
+ 620,
257
+ 841
258
+ ],
259
+ "page_idx": 1
260
+ },
261
+ {
262
+ "type": "text",
263
+ "text": "2.1 Training",
264
+ "text_level": 1,
265
+ "bbox": [
266
+ 507,
267
+ 852,
268
+ 623,
269
+ 866
270
+ ],
271
+ "page_idx": 1
272
+ },
273
+ {
274
+ "type": "text",
275
+ "text": "The training process for the compression model is illustrated in Figure 2, including both pretraining and fine-tuning stages. The compression model",
276
+ "bbox": [
277
+ 507,
278
+ 873,
279
+ 880,
280
+ 921
281
+ ],
282
+ "page_idx": 1
283
+ },
284
+ {
285
+ "type": "page_number",
286
+ "text": "25082",
287
+ "bbox": [
288
+ 475,
289
+ 927,
290
+ 524,
291
+ 940
292
+ ],
293
+ "page_idx": 1
294
+ },
295
+ {
296
+ "type": "text",
297
+ "text": "comprises two components: an encoder and a decoder, which is similar to an autoencoder and comparable to ICAE. The encoder is the frozen LLM $\\Theta_{\\mathrm{LLM}}$ with trainable LoRA parameters $\\Theta_{\\mathrm{Lora}}$ (Hu et al., 2022), while the decoder is the original frozen LLM $\\Theta_{\\mathrm{LLM}}$ . The encoder receives the original text tokens $\\mathbf{T} = (t_1, t_2, \\dots, t_l)$ and the compression tokens $\\mathbf{C} = (c_1, c_2, \\dots, c_k)$ . Through layers, the information in the text is saved into the compression tokens, whose KV values in each layer of the LLM $\\mathbf{H}_{\\mathbf{C}}$ are output and passed to the decoder.",
298
+ "bbox": [
299
+ 112,
300
+ 84,
301
+ 489,
302
+ 275
303
+ ],
304
+ "page_idx": 2
305
+ },
306
+ {
307
+ "type": "text",
308
+ "text": "During pretraining, the inputs of the decoder are the KV values of the compression tokens from the encoder, the beginning of sequence token, and the original text tokens $(\\mathbf{H}_{\\mathbf{C}},[\\mathbf{BOS}],\\mathbf{T})$ . The LLM is trained to regenerate the original text based on the KV values, using the end of sequence token [EOS] to denote when to stop. The cross-entropy loss between the output of the decoder and the original text is calculated and used to train the LoRA parameters in the encoder:",
309
+ "bbox": [
310
+ 112,
311
+ 278,
312
+ 489,
313
+ 439
314
+ ],
315
+ "page_idx": 2
316
+ },
317
+ {
318
+ "type": "equation",
319
+ "text": "\n$$\n\\mathcal {L} _ {\\mathrm {P}} = - \\sum_ {i = 1} ^ {l} \\log P \\left(t _ {i} \\mid \\mathbf {H} _ {\\mathbf {C}}, [ \\mathbf {B O S} ], t _ {1: i - 1}; \\Theta_ {\\mathrm {L L M}}, \\Theta_ {\\mathrm {L o r a}}\\right) \\tag {1}\n$$\n",
320
+ "text_format": "latex",
321
+ "bbox": [
322
+ 126,
323
+ 462,
324
+ 487,
325
+ 508
326
+ ],
327
+ "page_idx": 2
328
+ },
329
+ {
330
+ "type": "text",
331
+ "text": "For instruction fine-tuning, the process is similar to pretraining. However, instead of the original texts, the decoder is provided with questions $\\mathbf{Q} = (q_{1}, q_{2}, \\ldots, q_{m})$ and answers $\\mathbf{A} = (a_{1}, a_{2}, \\ldots, a_{n})$ , which are used to train the LLM to retrieve information from the KV values of the compression tokens and generate answers:",
332
+ "bbox": [
333
+ 112,
334
+ 511,
335
+ 489,
336
+ 623
337
+ ],
338
+ "page_idx": 2
339
+ },
340
+ {
341
+ "type": "equation",
342
+ "text": "\n$$\n\\mathcal {L} _ {\\mathrm {F}} = - \\sum_ {j = 1} ^ {n} \\log P \\left(a _ {j} \\mid \\mathbf {H} _ {\\mathbf {C}}, q _ {1: m}, a _ {1: j - 1}; \\Theta_ {\\text {L L M}}, \\Theta_ {\\text {L o r a}}\\right) \\tag {2}\n$$\n",
343
+ "text_format": "latex",
344
+ "bbox": [
345
+ 122,
346
+ 646,
347
+ 487,
348
+ 681
349
+ ],
350
+ "page_idx": 2
351
+ },
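Equations 1 and 2 above are both teacher-forced cross-entropy objectives conditioned on the KV values H_C of the compression tokens; only the visible prefix differs ([BOS] for pretraining, the question tokens for fine-tuning). A minimal sketch of that computation, assuming a Hugging Face-style causal language model decoder that accepts past_key_values; decoder, kv_c and the function name are illustrative placeholders, not the authors' released code:

import torch
import torch.nn.functional as F

def compression_loss(decoder, kv_c, prefix_ids, target_ids):
    # Teacher-forced cross-entropy of target_ids given the compression tokens'
    # KV cache kv_c and a visible prefix (Eq. 1: prefix = [BOS]; Eq. 2: prefix =
    # question tokens). Gradients reach only the encoder's LoRA weights through
    # kv_c; the decoder itself stays frozen, as described above.
    input_ids = torch.cat([prefix_ids, target_ids[:-1]]).unsqueeze(0)
    out = decoder(input_ids=input_ids, past_key_values=kv_c)
    logits = out.logits[0, len(prefix_ids) - 1 :, :]  # positions predicting t_1..t_l / a_1..a_n
    return F.cross_entropy(logits, target_ids)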
352
+ {
353
+ "type": "text",
354
+ "text": "The training process ensures no training-test overlap, as the original LLM parameters in both the encoder and decoder remain unchanged, and no additional parameters are introduced in the decoder. Thus, no information is saved in the decoder.",
355
+ "bbox": [
356
+ 112,
357
+ 695,
358
+ 487,
359
+ 772
360
+ ],
361
+ "page_idx": 2
362
+ },
363
+ {
364
+ "type": "text",
365
+ "text": "Main differences between 500xCompressor and ICAE: (1) The input of ICAE decoder is the output embeddings for the compression tokens, whereas 500xCompressor uses the KV values for the compression tokens. KV values could save more information and do not increase inference time. (2) In addition, this paper uses the [BOS] token to guide the LLM to regenerate the compressed texts, while ICAE creates a trainable new token.",
366
+ "bbox": [
367
+ 112,
368
+ 776,
369
+ 489,
370
+ 920
371
+ ],
372
+ "page_idx": 2
373
+ },
374
+ {
375
+ "type": "text",
376
+ "text": "2.2 Prediction",
377
+ "text_level": 1,
378
+ "bbox": [
379
+ 509,
380
+ 84,
381
+ 638,
382
+ 99
383
+ ],
384
+ "page_idx": 2
385
+ },
386
+ {
387
+ "type": "text",
388
+ "text": "During prediction, all encoder and decoder parameters are frozen. The original text is fed into the encoder, which saves the information into compression tokens. These compression tokens' KV values are then input into the decoder, which regenerates the compressed text when guided by the [BOS] token or generates an answer based on a given question:",
389
+ "bbox": [
390
+ 507,
391
+ 105,
392
+ 885,
393
+ 235
394
+ ],
395
+ "page_idx": 2
396
+ },
397
+ {
398
+ "type": "equation",
399
+ "text": "\n$$\n\\hat {t} _ {i} = \\underset {\\hat {t} _ {i}} {\\arg \\max } P (\\hat {t} _ {i} | \\mathbf {H} _ {\\mathbf {C}}, [ \\mathbf {B O S} ], \\hat {t} _ {1: i - 1}; \\boldsymbol {\\Theta} _ {\\mathrm {L L M}}) \\tag {3}\n$$\n",
400
+ "text_format": "latex",
401
+ "bbox": [
402
+ 547,
403
+ 256,
404
+ 882,
405
+ 280
406
+ ],
407
+ "page_idx": 2
408
+ },
409
+ {
410
+ "type": "equation",
411
+ "text": "\n$$\n\\hat {a} _ {j} = \\arg \\max _ {\\hat {a} _ {j}} P (\\hat {a} _ {j} | \\mathbf {H} _ {\\mathbf {C}}, q _ {1: m}, \\hat {a} _ {1: j - 1}; \\boldsymbol {\\Theta} _ {\\mathrm {L L M}}) \\tag {4}\n$$\n",
412
+ "text_format": "latex",
413
+ "bbox": [
414
+ 549,
415
+ 297,
416
+ 882,
417
+ 319
418
+ ],
419
+ "page_idx": 2
420
+ },
421
+ {
422
+ "type": "text",
423
+ "text": "where $\\hat{t}_i$ denotes the $i$ -th token in the regenerated text, and $\\hat{a}_j$ indicates the $j$ -th token in the generated answer.",
424
+ "bbox": [
425
+ 507,
426
+ 328,
427
+ 882,
428
+ 374
429
+ ],
430
+ "page_idx": 2
431
+ },
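Equations 3 and 4 amount to ordinary greedy decoding in which the original prompt tokens are replaced by the cached KV values of the compression tokens. A minimal sketch under the same assumptions as the training sketch above (Hugging Face-style decoder exposing past_key_values; all names illustrative):

import torch

@torch.no_grad()
def generate_from_kv(decoder, kv_c, prefix_ids, eos_id, max_new_tokens=400):
    # Greedy decoding conditioned on the compression tokens' KV cache.
    # prefix_ids is [BOS] for regeneration (Eq. 3) or the question tokens for QA (Eq. 4).
    past, ids = kv_c, prefix_ids.unsqueeze(0)
    generated = []
    for _ in range(max_new_tokens):
        out = decoder(input_ids=ids, past_key_values=past, use_cache=True)
        past = out.past_key_values                 # cache grows; earlier tokens are not re-encoded
        next_id = int(out.logits[0, -1].argmax())  # the arg max in Eq. 3 / Eq. 4
        if next_id == eos_id:
            break
        generated.append(next_id)
        ids = torch.tensor([[next_id]])
    return generated

Each generated token attends only to the few cached compression tokens plus the tokens produced so far, rather than to the full original prompt, which is the source of the computational savings discussed next.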
432
+ {
433
+ "type": "text",
434
+ "text": "By replacing the original text tokens with compressed tokens, the speed of answering questions is increased. This is because, in inference, each token in the question or generated answer must attend to the previous tokens. Replacing a large number of original text tokens with a small number of compressed tokens reduces computational needs.",
435
+ "bbox": [
436
+ 507,
437
+ 376,
438
+ 882,
439
+ 504
440
+ ],
441
+ "page_idx": 2
442
+ },
443
+ {
444
+ "type": "text",
445
+ "text": "3 Experiments",
446
+ "text_level": 1,
447
+ "bbox": [
448
+ 507,
449
+ 519,
450
+ 655,
451
+ 535
452
+ ],
453
+ "page_idx": 2
454
+ },
455
+ {
456
+ "type": "text",
457
+ "text": "3.1 Datasets",
458
+ "text_level": 1,
459
+ "bbox": [
460
+ 507,
461
+ 545,
462
+ 623,
463
+ 558
464
+ ],
465
+ "page_idx": 2
466
+ },
467
+ {
468
+ "type": "text",
469
+ "text": "The ArxivCorpus was used to pretrain 500xCompressor, and the compression model was then finetuned on the ArxivQA dataset. After that, six benchmarks with different context lengths were used to evaluate the compression models for various abilities: ArxivQA and TriviaQA (Joshi et al., 2017) for information extraction, RelationExtraction (Levy et al., 2017) for relation extraction, NaturalQuestions (Kwiatkowski et al., 2019) and TextbookQA (Kembhavi et al., 2017) for reading comprehension, and RACE (Lai et al., 2017) for reasoning. Among these datasets, ArxivQA is introduced in this paper, while the others are classical QA datasets from MRQA (Fisch et al., 2019).",
470
+ "bbox": [
471
+ 507,
472
+ 567,
473
+ 882,
474
+ 791
475
+ ],
476
+ "page_idx": 2
477
+ },
478
+ {
479
+ "type": "text",
480
+ "text": "The ArxivCorpus comprises Arxiv paper abstracts published before April 2024, with pre-July 2023 papers forming the training set and post-January 2024 papers forming the development and test sets. Test set abstracts are selected by token lengths (96, 192, 288, 384, and 480) to evaluate the regeneration performance of prompt compression methods.",
481
+ "bbox": [
482
+ 507,
483
+ 793,
484
+ 882,
485
+ 919
486
+ ],
487
+ "page_idx": 2
488
+ },
489
+ {
490
+ "type": "page_number",
491
+ "text": "25083",
492
+ "bbox": [
493
+ 475,
494
+ 927,
495
+ 524,
496
+ 940
497
+ ],
498
+ "page_idx": 2
499
+ },
500
+ {
501
+ "type": "text",
502
+ "text": "The ArxivCorpus was chosen for several reasons: (1) High-quality academic content with clear publication timestamps, (2) Verified temporal separation from LLaMa-3's March 2023 knowledge cutoff, ensuring the regenerated texts are based on the compressed prompts rather than the memory of the LLM, and (3) Official distribution through Cornell University, addressing copyright problems that influence datasets like Pile.",
503
+ "bbox": [
504
+ 112,
505
+ 84,
506
+ 487,
507
+ 227
508
+ ],
509
+ "page_idx": 3
510
+ },
511
+ {
512
+ "type": "text",
513
+ "text": "The ArxivQA dataset (more than 250k QA pairs), derived from ArxivCorpus using LLaMa-3-70b-Instruct, contains extractive QA pairs with the number of QA pairs increasing proportionally with abstract length (starting with 5 pairs per 96-token abstract). Training and development QA pairs are generated from the training set abstracts, while test set pairs come from test set abstracts.",
514
+ "bbox": [
515
+ 112,
516
+ 229,
517
+ 487,
518
+ 356
519
+ ],
520
+ "page_idx": 3
521
+ },
522
+ {
523
+ "type": "text",
524
+ "text": "ArxivQA offers three main advantages: (1) Guaranteed LLM-unseen test contexts avoiding training-test overlap, (2) Extractive QA format allowing quantitative evaluation of information loss, and (3) Domain-specific questions generated by LLaMa-3-70b-Instruct based on ArxivCorpus ensuring both difficulty and quality.",
525
+ "bbox": [
526
+ 112,
527
+ 357,
528
+ 487,
529
+ 470
530
+ ],
531
+ "page_idx": 3
532
+ },
533
+ {
534
+ "type": "text",
535
+ "text": "3.2 Baselines and Gold Standard",
536
+ "text_level": 1,
537
+ "bbox": [
538
+ 112,
539
+ 481,
540
+ 386,
541
+ 495
542
+ ],
543
+ "page_idx": 3
544
+ },
545
+ {
546
+ "type": "text",
547
+ "text": "Two baseline approaches are chosen: LLMLingua2 (Pan et al., 2024), exemplifying hard prompt compression through selective token elimination, and ICAE, utilizing soft prompt compression via continuous vectors. Both methods process the compressed context alongside questions for LLM inference. The gold standard provides the LLM with the complete combination of instruction, uncompressed context, and question.",
548
+ "bbox": [
549
+ 112,
550
+ 502,
551
+ 487,
552
+ 646
553
+ ],
554
+ "page_idx": 3
555
+ },
556
+ {
557
+ "type": "text",
558
+ "text": "3.3 Evaluation Metrics",
559
+ "text_level": 1,
560
+ "bbox": [
561
+ 112,
562
+ 659,
563
+ 317,
564
+ 671
565
+ ],
566
+ "page_idx": 3
567
+ },
568
+ {
569
+ "type": "text",
570
+ "text": "For evaluating text regeneration, Rouge-2-F (based on bigram overlap) and BLEU (measuring n-gram precision) scores are used to assess the similarity between regenerated and original texts. For extractive QA tasks, F1 score (the harmonic mean of precision and recall) and Exact Match (EM) are used to measure answer accuracy. Higher scores in all these metrics indicate better performance, with a maximum value of $100\\%$ .",
571
+ "bbox": [
572
+ 112,
573
+ 678,
574
+ 487,
575
+ 822
576
+ ],
577
+ "page_idx": 3
578
+ },
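The extractive QA metrics named above are commonly computed with SQuAD-style token-level definitions, which match the description here (F1 as the harmonic mean of precision and recall over answer tokens, EM as string equality after normalization); a small self-contained sketch, assuming that normalization scheme:

import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, drop punctuation and the articles a/an/the, collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)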
579
+ {
580
+ "type": "text",
581
+ "text": "3.4 Models",
582
+ "text_level": 1,
583
+ "bbox": [
584
+ 112,
585
+ 835,
586
+ 218,
587
+ 848
588
+ ],
589
+ "page_idx": 3
590
+ },
591
+ {
592
+ "type": "text",
593
+ "text": "The encoder of 500xCompressor is frozen LLaMA-3-8B-Instruct with trainable LoRA parameters (rank=64) and the decoder is frozen LLaMA-3-8B-Instruct (Grattafori et al., 2024).",
594
+ "bbox": [
595
+ 112,
596
+ 856,
597
+ 487,
598
+ 920
599
+ ],
600
+ "page_idx": 3
601
+ },
602
+ {
603
+ "type": "text",
604
+ "text": "4 Results",
605
+ "text_level": 1,
606
+ "bbox": [
607
+ 509,
608
+ 83,
609
+ 606,
610
+ 98
611
+ ],
612
+ "page_idx": 3
613
+ },
614
+ {
615
+ "type": "text",
616
+ "text": "4.1 Efficiency Improvement",
617
+ "text_level": 1,
618
+ "bbox": [
619
+ 507,
620
+ 110,
621
+ 742,
622
+ 126
623
+ ],
624
+ "page_idx": 3
625
+ },
626
+ {
627
+ "type": "text",
628
+ "text": "Table 1 demonstrates the importance of prompt compression for efficiency gains, showing improvements in both first-time processing (new prompt) and cached processing scenarios (reused prompt) at $500\\mathrm{x}$ compression. For new prompts, while compression introduces a minimal computational cost $(+0.4\\%)$ , the savings increase substantially with output length, reaching $49.10\\%$ reduction in computation at 400 tokens. Reused prompts demonstrate immediate computational benefits, achieving $90.64\\%$ reduction at 100 tokens output length. For memory usage of KV cache, reused prompts achieve $99.80\\%$ initial memory reduction, and both prompts show consistent memory savings from $83.16\\%$ to $55.33\\%$ as output length increases to 400 tokens. Given that real-world applications often involve batch processing and repeated access to the same content, these efficiency gains make $500\\mathrm{x}$ Compressor valuable in real-world scenarios.",
629
+ "bbox": [
630
+ 505,
631
+ 131,
632
+ 882,
633
+ 437
634
+ ],
635
+ "page_idx": 3
636
+ },
637
+ {
638
+ "type": "table",
639
+ "img_path": "images/0e8ebfb4f60bb274570237a60647544ceb882692fdfe544d41b2829fa2f1ce53.jpg",
640
+ "table_caption": [],
641
+ "table_footnote": [],
642
+ "table_body": "<table><tr><td rowspan=\"2\">Output Length</td><td colspan=\"2\">Calculations</td><td colspan=\"2\">Memory</td></tr><tr><td>New prompt</td><td>Reused prompt</td><td>New prompt</td><td>Reused prompt</td></tr><tr><td>0</td><td>+0.4</td><td>0</td><td>+0.2</td><td>-99.80</td></tr><tr><td>100</td><td>-27.39</td><td>-90.64</td><td>-83.16</td><td>-83.16</td></tr><tr><td>200</td><td>-40.47</td><td>-83.09</td><td>-71.28</td><td>-71.28</td></tr><tr><td>300</td><td>-46.56</td><td>-76.71</td><td>-62.37</td><td>-62.37</td></tr><tr><td>400</td><td>-49.10</td><td>-71.23</td><td>-55.33</td><td>-55.33</td></tr></table>",
643
+ "bbox": [
644
+ 532,
645
+ 449,
646
+ 858,
647
+ 549
648
+ ],
649
+ "page_idx": 3
650
+ },
651
+ {
652
+ "type": "text",
653
+ "text": "Table 1: Computation and memory savings (in percentage) achieved by $500\\mathrm{x}$ Compressor for different output lengths (token) at $500\\rightarrow 1$ compression. A new prompt refers to first-time processing of input, while a reused prompt denotes repeated processing that can utilize cached intermediate results.",
654
+ "bbox": [
655
+ 507,
656
+ 558,
657
+ 882,
658
+ 643
659
+ ],
660
+ "page_idx": 3
661
+ },
662
+ {
663
+ "type": "text",
664
+ "text": "4.2 Text Regeneration",
665
+ "text_level": 1,
666
+ "bbox": [
667
+ 507,
668
+ 674,
669
+ 697,
670
+ 689
671
+ ],
672
+ "page_idx": 3
673
+ },
674
+ {
675
+ "type": "text",
676
+ "text": "The text regeneration capabilities of different prompt compression methods are evaluated on the strictly unseen test set of ArxivCorpus. Table 2 shows the performance across varying context lengths and compression ratios, measured by Rouge-2-F and BLEU scores. Our analysis examines the overall advantages, influencing variables, and stability of the improvements.",
677
+ "bbox": [
678
+ 505,
679
+ 695,
680
+ 882,
681
+ 822
682
+ ],
683
+ "page_idx": 3
684
+ },
685
+ {
686
+ "type": "text",
687
+ "text": "500xCompressor demonstrates consistent better performance over ICAE across all test scenarios. Our method outperforms ICAE on both Rouge-2-F and BLEU metrics for all context lengths and compression ratios tested. Quantitatively, the average improvement for Rouge-2-F scores increases by",
688
+ "bbox": [
689
+ 507,
690
+ 825,
691
+ 882,
692
+ 921
693
+ ],
694
+ "page_idx": 3
695
+ },
696
+ {
697
+ "type": "page_number",
698
+ "text": "25084",
699
+ "bbox": [
700
+ 475,
701
+ 927,
702
+ 524,
703
+ 940
704
+ ],
705
+ "page_idx": 3
706
+ },
707
+ {
708
+ "type": "table",
709
+ "img_path": "images/324c5d9a518ec9bcfb0298d6ef3c3be3e9cbb409580b76a7497409b168a4e9d0.jpg",
710
+ "table_caption": [],
711
+ "table_footnote": [],
712
+ "table_body": "<table><tr><td rowspan=\"2\">Dataset Length Eval. Metrics</td><td colspan=\"2\">96</td><td colspan=\"2\">192</td><td colspan=\"2\">ArxivCorpus</td><td colspan=\"2\">384</td><td colspan=\"2\">480</td><td colspan=\"2\">Average</td></tr><tr><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td></tr><tr><td>Ours500→16</td><td>99.53</td><td>99.48</td><td>96.49</td><td>96.21</td><td>82.31</td><td>80.93</td><td>55.36</td><td>53.50</td><td>31.55</td><td>32.19</td><td>73.05</td><td>72.46</td></tr><tr><td>ICAE500→16</td><td>83.52</td><td>81.85</td><td>58.21</td><td>55.90</td><td>40.96</td><td>38.37</td><td>34.28</td><td>32.03</td><td>29.71</td><td>29.61</td><td>49.33</td><td>47.55</td></tr><tr><td>Absolute Δ</td><td>16.02</td><td>17.62</td><td>38.28</td><td>40.31</td><td>41.35</td><td>42.56</td><td>21.07</td><td>21.46</td><td>1.84</td><td>2.58</td><td>23.71</td><td>24.91</td></tr><tr><td>Relative Δ</td><td>19.19%</td><td>21.53%</td><td>65.76%</td><td>72.12%</td><td>100.96%</td><td>110.92%</td><td>61.47%</td><td>66.98%</td><td>6.20%</td><td>8.71%</td><td>48.07%</td><td>52.38%</td></tr><tr><td>Ours500→1</td><td>53.49</td><td>49.77</td><td>29.73</td><td>26.53</td><td>22.15</td><td>19.15</td><td>20.61</td><td>17.91</td><td>18.85</td><td>18.80</td><td>28.97</td><td>26.43</td></tr><tr><td>ICAE500→1</td><td>30.29</td><td>24.18</td><td>18.21</td><td>13.94</td><td>13.89</td><td>10.36</td><td>13.36</td><td>9.92</td><td>12.28</td><td>11.68</td><td>17.61</td><td>14.02</td></tr><tr><td>Absolute Δ</td><td>23.19</td><td>25.59</td><td>11.51</td><td>12.59</td><td>8.25</td><td>8.79</td><td>7.25</td><td>7.99</td><td>6.56</td><td>7.11</td><td>11.36</td><td>12.41</td></tr><tr><td>Relative Δ</td><td>76.58%</td><td>105.84%</td><td>63.22%</td><td>90.33%</td><td>59.45%</td><td>84.81%</td><td>54.30%</td><td>80.48%</td><td>53.44%</td><td>60.86%</td><td>64.50%</td><td>88.56%</td></tr></table>",
713
+ "bbox": [
714
+ 127,
715
+ 80,
716
+ 870,
717
+ 190
718
+ ],
719
+ "page_idx": 4
720
+ },
721
+ {
722
+ "type": "text",
723
+ "text": "Table 2: Quantitative evaluation of text regeneration performance on the ArxivCorpus dataset with strictly unseen texts. RG and BL denote Rouge-2-F and BLEU scores respectively. The notation $\\mathrm{X}\\rightarrow \\mathrm{Y}$ indicates compression from a maximum of X input tokens to Y compression tokens. Higher values between $500\\mathrm{xCompressor}$ (Ours) and ICAE baseline are shown in bold and their performance differences are shown by absolute and relative $\\Delta$ . All improvements (shown in green) demonstrate the consistent better performance of $500\\mathrm{xCompressor}$ across varying context lengths and compression ratios.",
724
+ "bbox": [
725
+ 112,
726
+ 200,
727
+ 882,
728
+ 286
729
+ ],
730
+ "page_idx": 4
731
+ },
732
+ {
733
+ "type": "text",
734
+ "text": "23.71 points $(48.07\\%)$ and 11.36 points $(64.50\\%)$ at $31.25\\mathrm{x}$ and $500\\mathrm{x}$ .",
735
+ "bbox": [
736
+ 112,
737
+ 311,
738
+ 485,
739
+ 341
740
+ ],
741
+ "page_idx": 4
742
+ },
743
+ {
744
+ "type": "text",
745
+ "text": "The regeneration performance exhibits clear dependencies on both compression ratio and context length. Lower compression ratios and shorter contexts yield higher text precision, with Rouge-2-F and BLEU scores consistently exceeding $95\\%$ in optimal conditions. As compression ratios increase, the relative improvements over ICAE become more obvious, showing relative gains of $64.50\\%$ in Rouge-2-F and $88.56\\%$ in BLEU scores. While performance naturally decreases with longer contexts, the decrease rate shows a stable trend across different compression scenarios.",
746
+ "bbox": [
747
+ 112,
748
+ 343,
749
+ 487,
750
+ 535
751
+ ],
752
+ "page_idx": 4
753
+ },
754
+ {
755
+ "type": "text",
756
+ "text": "The method exhibits consistent stability in performance gains. Both absolute and relative improvements remain uniform across Rouge-2-F and BLEU metrics, indicating robust improvement in regeneration quality regardless of the evaluation criteria used.",
757
+ "bbox": [
758
+ 112,
759
+ 536,
760
+ 487,
761
+ 631
762
+ ],
763
+ "page_idx": 4
764
+ },
765
+ {
766
+ "type": "text",
767
+ "text": "While the results above demonstrate good text regeneration ability, the true performance of compression is better shown in downstream QA tasks.",
768
+ "bbox": [
769
+ 112,
770
+ 633,
771
+ 487,
772
+ 681
773
+ ],
774
+ "page_idx": 4
775
+ },
776
+ {
777
+ "type": "text",
778
+ "text": "4.3 Question Answering",
779
+ "text_level": 1,
780
+ "bbox": [
781
+ 112,
782
+ 690,
783
+ 321,
784
+ 707
785
+ ],
786
+ "page_idx": 4
787
+ },
788
+ {
789
+ "type": "text",
790
+ "text": "Figure 3 shows the performance of different prompt compression methods across varying compression ratios on QA datasets. $500\\mathrm{x}$ Compressor consistently outperforms baseline methods at all compression ratios tested, confirming that KV values have advantages over embeddings (used in ICAE) in preserving information. Notably, as the compression ratio increases from $31.25\\mathrm{x}$ to $500\\mathrm{x}$ , $500\\mathrm{x}$ Compressor exhibits better performance retention, dropping from $74.53\\%$ to $70.60\\%$ (F1 score) and from $84.57\\%$ to $77.92\\%$ (Exact Match) of its uncompressed performance.",
791
+ "bbox": [
792
+ 112,
793
+ 711,
794
+ 487,
795
+ 904
796
+ ],
797
+ "page_idx": 4
798
+ },
799
+ {
800
+ "type": "text",
801
+ "text": "Tables 3 and 4 present evaluation results for",
802
+ "bbox": [
803
+ 131,
804
+ 904,
805
+ 487,
806
+ 920
807
+ ],
808
+ "page_idx": 4
809
+ },
810
+ {
811
+ "type": "image",
812
+ "img_path": "images/2df329bbb7090c3470d1949f910d488c466523b621f7d0f477c8976882c4c7b0.jpg",
813
+ "image_caption": [
814
+ "Figure 3: Performance comparison of prompt compression methods on in-domain and cross-domain QA datasets across varying compression ratios. Y-axis shows F1 scores normalized by uncompressed performance, while X-axis (log scale) denotes compression ratios defined as #maximum_uncompressed_tokens/#compression_tokens. $\\uparrow$ indicates higher values are better."
815
+ ],
816
+ "image_footnote": [],
817
+ "bbox": [
818
+ 564,
819
+ 311,
820
+ 830,
821
+ 495
822
+ ],
823
+ "page_idx": 4
824
+ },
825
+ {
826
+ "type": "text",
827
+ "text": "500xCompressor on in-domain and cross-domain QA datasets. These results are analyzed from five aspects: overall performance, influencing variables, generalization capability, scalability, and stability.",
828
+ "bbox": [
829
+ 507,
830
+ 656,
831
+ 882,
832
+ 720
833
+ ],
834
+ "page_idx": 4
835
+ },
836
+ {
837
+ "type": "text",
838
+ "text": "Overall Performance 500xCompressor demonstrates higher performance across nearly all context lengths, compression ratios, and both in-domain and cross-domain datasets compared to ICAE and LLMLingua-2. In cross-domain evaluation, it achieves average improvements of 7.10 F1 and 7.61 EM points (19.94% and 37.66% relative) at $500\\rightarrow 16$ compression, with improvements increasing to 21.93 F1 and 16.64 EM points (107.70% and 161.58% relative) at $500\\rightarrow 1$ compression.",
839
+ "bbox": [
840
+ 507,
841
+ 724,
842
+ 882,
843
+ 885
844
+ ],
845
+ "page_idx": 4
846
+ },
847
+ {
848
+ "type": "text",
849
+ "text": "Performance variables The effectiveness of compression is influenced by both context length",
850
+ "bbox": [
851
+ 507,
852
+ 889,
853
+ 882,
854
+ 921
855
+ ],
856
+ "page_idx": 4
857
+ },
858
+ {
859
+ "type": "page_number",
860
+ "text": "25085",
861
+ "bbox": [
862
+ 475,
863
+ 927,
864
+ 524,
865
+ 940
866
+ ],
867
+ "page_idx": 4
868
+ },
869
+ {
870
+ "type": "table",
871
+ "img_path": "images/d4aa424f2e0a8033a4a927d90eaac54fb088d0e4a85dc4d68973644874b2e332.jpg",
872
+ "table_caption": [],
873
+ "table_footnote": [],
874
+ "table_body": "<table><tr><td rowspan=\"2\">Dataset Length Eval. Metrics</td><td colspan=\"2\">96</td><td colspan=\"2\">192</td><td colspan=\"6\">ArxivQA</td><td colspan=\"2\">Average</td></tr><tr><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td></tr><tr><td>Instruct</td><td>64.41</td><td>12.40</td><td>61.18</td><td>13.90</td><td>56.08</td><td>9.00</td><td>52.86</td><td>12.40</td><td>44.57</td><td>16.40</td><td>55.82</td><td>12.82</td></tr><tr><td>\\( Lingua_{500}\\rightarrow 64 \\)</td><td>45.88</td><td>7.90</td><td>29.91</td><td>8.20</td><td>21.39</td><td>4.20</td><td>17.68</td><td>3.40</td><td>16.17</td><td>4.20</td><td>26.21</td><td>5.58</td></tr><tr><td>\\( Lingua_{500}\\rightarrow 32 \\)</td><td>26.97</td><td>3.60</td><td>20.45</td><td>4.40</td><td>15.82</td><td>2.40</td><td>13.00</td><td>2.00</td><td>12.28</td><td>2.10</td><td>17.70</td><td>2.90</td></tr><tr><td>\\( Ours_{500}\\rightarrow 16 \\)</td><td>60.49</td><td>25.60</td><td>47.65</td><td>16.50</td><td>35.50</td><td>8.40</td><td>30.00</td><td>7.10</td><td>31.98</td><td>11.70</td><td>41.12</td><td>13.86</td></tr><tr><td>\\( ICAE_{500}\\rightarrow 16 \\)</td><td>57.95</td><td>23.20</td><td>44.41</td><td>15.10</td><td>33.88</td><td>7.70</td><td>28.06</td><td>7.20</td><td>29.72</td><td>10.60</td><td>38.80</td><td>12.76</td></tr><tr><td>Absolute Δ</td><td>2.54</td><td>2.40</td><td>3.23</td><td>1.40</td><td>1.62</td><td>0.70</td><td>1.94</td><td>0.10</td><td>2.25</td><td>1.10</td><td>2.31</td><td>1.10</td></tr><tr><td>Relative Δ</td><td>4.38%</td><td>10.34%</td><td>7.29%</td><td>9.27%</td><td>4.78%</td><td>9.09%</td><td>6.91%</td><td>1.38%</td><td>7.59%</td><td>10.37%</td><td>5.97%</td><td>8.62%</td></tr><tr><td>\\( Ours_{500}\\rightarrow 1 \\)</td><td>42.91</td><td>10.30</td><td>32.88</td><td>6.50</td><td>25.82</td><td>3.80</td><td>23.01</td><td>3.60</td><td>24.29</td><td>6.50</td><td>29.78</td><td>6.14</td></tr><tr><td>\\( ICAE_{500}\\rightarrow 1 \\)</td><td>26.87</td><td>3.50</td><td>21.76</td><td>2.30</td><td>20.34</td><td>2.20</td><td>17.35</td><td>1.70</td><td>17.72</td><td>3.60</td><td>20.81</td><td>2.66</td></tr><tr><td>Absolute Δ</td><td>16.04</td><td>6.80</td><td>11.12</td><td>4.20</td><td>5.47</td><td>1.60</td><td>5.65</td><td>1.90</td><td>6.56</td><td>2.90</td><td>8.97</td><td>3.48</td></tr><tr><td>Relative Δ</td><td>59.71%</td><td>194.28%</td><td>51.12%</td><td>182.60%</td><td>26.89%</td><td>72.72%</td><td>32.58%</td><td>111.76%</td><td>37.03%</td><td>80.55%</td><td>43.11%</td><td>130.82%</td></tr></table>",
875
+ "bbox": [
876
+ 122,
877
+ 80,
878
+ 877,
879
+ 219
880
+ ],
881
+ "page_idx": 5
882
+ },
883
+ {
884
+ "type": "table",
885
+ "img_path": "images/cc5fa791a578323e2ae825394c6a4bfb83a521d1a052ff2d4dfd9a1c0206774f.jpg",
886
+ "table_caption": [
887
+ "Table 3: In-domain QA evaluation results on the ArxivQA dataset with strictly unseen contexts. Length indicates the length of the context to be compressed. F1 and Exact Match (EM) scores are reported across varying context lengths. \"Instruct\" means full-context performance with instructions, while Lingua denotes LLMLingua-2 baseline. Performance deltas $(\\Delta)$ between $500\\mathrm{x}$ Compressor (Ours) and ICAE baseline are shown in green (improvements) and red (decreases)."
888
+ ],
889
+ "table_footnote": [],
890
+ "table_body": "<table><tr><td rowspan=\"2\">Dataset Length Eval. Metrics</td><td colspan=\"2\">RE 39 (553)</td><td colspan=\"2\">NaturalQ 258 (2721)</td><td colspan=\"2\">RACE 369 (824)</td><td colspan=\"2\">TextbookQA 729 (963)</td><td colspan=\"2\">TriviaQA 955 (2124)</td><td colspan=\"2\">Average</td></tr><tr><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td></tr><tr><td>Instruct</td><td>71.16</td><td>52.98</td><td>66.30</td><td>39.92</td><td>39.55</td><td>13.94</td><td>45.15</td><td>19.49</td><td>63.80</td><td>41.65</td><td>57.19</td><td>33.60</td></tr><tr><td>\\( Lingua_{500}\\rightarrow{16} \\)</td><td>57.78</td><td>41.58</td><td>40.46</td><td>23.15</td><td>12.58</td><td>5.93</td><td>29.38</td><td>19.16</td><td>56.06</td><td>46.26</td><td>39.25</td><td>27.22</td></tr><tr><td>\\( Lingua_{500}\\rightarrow{8} \\)</td><td>37.85</td><td>23.98</td><td>32.71</td><td>17.94</td><td>9.11</td><td>3.11</td><td>28.67</td><td>17.29</td><td>56.15</td><td>45.58</td><td>32.90</td><td>21.58</td></tr><tr><td>\\( Ours_{500}\\rightarrow{16} \\)</td><td>68.47</td><td>50.06</td><td>45.53</td><td>26.40</td><td>25.53</td><td>10.97</td><td>30.31</td><td>18.36</td><td>43.76</td><td>33.25</td><td>42.72</td><td>27.81</td></tr><tr><td>\\( ICAE_{500}\\rightarrow{16} \\)</td><td>66.03</td><td>44.60</td><td>46.10</td><td>25.18</td><td>24.32</td><td>9.64</td><td>13.27</td><td>4.79</td><td>28.36</td><td>16.78</td><td>35.62</td><td>20.20</td></tr><tr><td>Absolute \\( \\Delta \\)</td><td>2.43</td><td>5.46</td><td>0.57</td><td>1.21</td><td>1.20</td><td>1.33</td><td>17.03</td><td>13.57</td><td>15.40</td><td>16.46</td><td>7.10</td><td>7.61</td></tr><tr><td>Relative \\( \\Delta \\)</td><td>3.69%</td><td>12.24%</td><td>1.24%</td><td>4.82%</td><td>4.95%</td><td>13.84%</td><td>128.30%</td><td>283.33%</td><td>54.32%</td><td>98.08%</td><td>19.94%</td><td>37.66%</td></tr><tr><td>\\( Ours_{500}\\rightarrow{1} \\)</td><td>63.49</td><td>45.72</td><td>41.36</td><td>22.38</td><td>21.37</td><td>7.41</td><td>30.67</td><td>16.83</td><td>54.61</td><td>42.40</td><td>42.30</td><td>26.95</td></tr><tr><td>\\( ICAE_{500}\\rightarrow{1} \\)</td><td>44.49</td><td>27.27</td><td>26.65</td><td>11.21</td><td>14.24</td><td>4.45</td><td>6.19</td><td>2.19</td><td>10.24</td><td>6.37</td><td>20.36</td><td>10.30</td></tr><tr><td>Absolute \\( \\Delta \\)</td><td>18.99</td><td>18.45</td><td>14.70</td><td>11.16</td><td>7.12</td><td>2.96</td><td>24.48</td><td>14.63</td><td>44.37</td><td>36.03</td><td>21.93</td><td>16.64</td></tr><tr><td>Relative \\( \\Delta \\)</td><td>42.70%</td><td>67.66%</td><td>55.17%</td><td>99.52%</td><td>50.03%</td><td>66.41%</td><td>395.24%</td><td>666.14%</td><td>432.97%</td><td>565.32%</td><td>107.70%</td><td>161.58%</td></tr></table>",
891
+ "bbox": [
892
+ 115,
893
+ 312,
894
+ 882,
895
+ 451
896
+ ],
897
+ "page_idx": 5
898
+ },
899
+ {
900
+ "type": "text",
901
+ "text": "Table 4: Cross-domain QA evaluation results on diverse QA datasets including RelationExtraction (RE), NaturalQuestions (NaturalQ), RACE, TextbookQA, and TriviaQA. Context lengths are reported as average (maximum) token counts.",
902
+ "bbox": [
903
+ 112,
904
+ 461,
905
+ 882,
906
+ 504
907
+ ],
908
+ "page_idx": 5
909
+ },
910
+ {
911
+ "type": "text",
912
+ "text": "and compression ratio. While lower compression ratios and shorter contexts generally yield better performance, some cross-domain datasets exhibit interesting results. Notably, TextbookQA and TriviaQA show improved performance at $500\\rightarrow 1$ compared to $500\\rightarrow 16$ compression.",
913
+ "bbox": [
914
+ 112,
915
+ 530,
916
+ 487,
917
+ 626
918
+ ],
919
+ "page_idx": 5
920
+ },
921
+ {
922
+ "type": "text",
923
+ "text": "Generalization Capability The model's generalization ability is clearly shown in its cross-domain performance. The performance gap between $500\\mathrm{x}$ Compressor and ICAE widens at higher compression ratios across all QA datasets. Cross-domain improvements are consistently larger than in-domain gains, reaching up to $107.70\\%$ relative improvement in the average F1 at $500\\rightarrow 1$ compression.",
924
+ "bbox": [
925
+ 112,
926
+ 629,
927
+ 489,
928
+ 772
929
+ ],
930
+ "page_idx": 5
931
+ },
932
+ {
933
+ "type": "text",
934
+ "text": "Scalability and Robustness Context length influences performance differently across domains. For in-domain tasks, performance decreases stabilizes with increasing context length. In cross-domain scenarios, longer average context lengths relate with larger improvements, as proved by TextbookQA and TriviaQA showing substantial gains of $395.24\\%$ and $432.97\\%$ relative improvement respectively over ICAE at $500 \\rightarrow 1$ compression.",
935
+ "bbox": [
936
+ 112,
937
+ 776,
938
+ 489,
939
+ 921
940
+ ],
941
+ "page_idx": 5
942
+ },
943
+ {
944
+ "type": "text",
945
+ "text": "$500\\mathrm{x}$ Compressor demonstrates better robustness to increased compression ratios as well, with average F1 scores decreasing by only 0.42 points from $500 \\rightarrow 16$ to $500 \\rightarrow 1$ , compared to ICAE's 15.26-point reduction.",
946
+ "bbox": [
947
+ 507,
948
+ 530,
949
+ 884,
950
+ 609
951
+ ],
952
+ "page_idx": 5
953
+ },
954
+ {
955
+ "type": "text",
956
+ "text": "Stability The performance improvements exhibit consistency in both absolute and relative gains across different compression ratios and datasets. This stability is observed in F1 and EM improvements and remains in both in-domain and cross-domain evaluations.",
957
+ "bbox": [
958
+ 507,
959
+ 611,
960
+ 884,
961
+ 706
962
+ ],
963
+ "page_idx": 5
964
+ },
965
+ {
966
+ "type": "text",
967
+ "text": "4.4 Case Study",
968
+ "text_level": 1,
969
+ "bbox": [
970
+ 507,
971
+ 721,
972
+ 645,
973
+ 736
974
+ ],
975
+ "page_idx": 5
976
+ },
977
+ {
978
+ "type": "text",
979
+ "text": "Table 5 presents comparative examples of text regeneration and QA pairs among $500\\mathrm{x}$ Compressor and baselines. The examples verify previous findings that baselines demonstrate higher rates of information loss, mistakes, and hallucinations compared to $500\\mathrm{x}$ Compressor in both text regeneration and QA tasks.",
980
+ "bbox": [
981
+ 507,
982
+ 743,
983
+ 884,
984
+ 854
985
+ ],
986
+ "page_idx": 5
987
+ },
988
+ {
989
+ "type": "text",
990
+ "text": "While regeneration quality typically relates with QA performance, the examples reveal exceptions where accurate text regeneration leads to incorrect QA responses, and imperfect regeneration pro",
991
+ "bbox": [
992
+ 507,
993
+ 857,
994
+ 882,
995
+ 921
996
+ ],
997
+ "page_idx": 5
998
+ },
999
+ {
1000
+ "type": "page_number",
1001
+ "text": "25086",
1002
+ "bbox": [
1003
+ 475,
1004
+ 927,
1005
+ 524,
1006
+ 940
1007
+ ],
1008
+ "page_idx": 5
1009
+ },
1010
+ {
1011
+ "type": "table",
1012
+ "img_path": "images/54558f68995ca9616de91546e3ef766ea5163b2829638ef363be04fe542cc80c.jpg",
1013
+ "table_caption": [],
1014
+ "table_footnote": [],
1015
+ "table_body": "<table><tr><td>Original</td><td>500xCompressor (ours)</td><td>ICAE (baseline)</td></tr><tr><td>We show that every reciprocity sheaf gives rise to a cycle (pre)module in the sense of Rost over a perfect field. Over a perfect field of positive characteristic, we show that the first cohomology group of a logarithmic de Rham-Witt sheaf has a partial cycle module structure. As a consequence, we show that Kato complexes of logarithmic de Rham-Witt sheaves satisfy functoriality properties similar to Rost&#x27;s cycle complexes.</td><td>We show that every reciprocity sheaf gives rise to a cycle (pre)module in the sense of Rost over a perfect field. Over a perfect field of positive characteristic, we show that the first co-homology group of a logarithmic de Rham-Witt cycle module has a partial cycle structure. As a consequence, we show that Kato modules of logarithmic de Rham-Witt complexes satisfy functorial properties similar to Rost&#x27;s cycle complexes.</td><td>We show that every sheaf reciprocity gives rise to a cycle (pre)module in the sense of Rost over a perfect field Rost cycle. In the perfect field case, we show that over a positive characteristic field, the first logarithmic de Rham-Witt sheaf cohomology group of a Witt log-Witt cycle has a partial cycle module structure decomposition. As a consequence, we show that Kato&#x27;s cycle complexes of logarithmic de Rham-Witt sheaves satisfy functoriality properties similar to Rost cycle complexes.</td></tr><tr><td>Q: What type of sheaf gives rise to a cycle premodule?A: Every reciprocity sheaf.</td><td>A: a reciprocity sheaf</td><td>A: a reciprocity sheaf of (logarithmic) differential forms</td></tr><tr><td>Q: Over what type of field do we show that Kato complexes satisfy functoriality properties?A: Over a perfect field of positive characteristic.</td><td>A: perfect fields of positive characteristic</td><td>A: a perfect field of positive characteristic zero</td></tr><tr><td>Q: What is the structure of the first cohomology group of a logarithmic de Rham-Witt sheaf?A: a partial cycle module structure</td><td>A: a partial cycle complex</td><td>A: a partial Kato cycle complex</td></tr><tr><td>Q: What type of complexes satisfy functoriality properties similar to Rost&#x27;s cycle complexes?A: Kato complexes of logarithmic de Rham-Witt sheaves</td><td>A: Kato-Witt cycle complexes of logarithmic de Rham-Witt sheaves</td><td>A: Kato&#x27;s complexes of logarithmic de Rham-Witt sheaves</td></tr></table>",
1016
+ "bbox": [
1017
+ 115,
1018
+ 80,
1019
+ 878,
1020
+ 461
1021
+ ],
1022
+ "page_idx": 6
1023
+ },
1024
+ {
1025
+ "type": "text",
1026
+ "text": "duces correct answers. This observation highlights the relationship between compression quality and task performance.",
1027
+ "bbox": [
1028
+ 112,
1029
+ 567,
1030
+ 487,
1031
+ 617
1032
+ ],
1033
+ "page_idx": 6
1034
+ },
1035
+ {
1036
+ "type": "text",
1037
+ "text": "4.5 Ablation Studies",
1038
+ "text_level": 1,
1039
+ "bbox": [
1040
+ 112,
1041
+ 627,
1042
+ 290,
1043
+ 642
1044
+ ],
1045
+ "page_idx": 6
1046
+ },
1047
+ {
1048
+ "type": "text",
1049
+ "text": "The performance of compression models is influenced by several variables, including the compression method (ICAE or 500xCompressor), task type (in-domain or cross-domain datasets), context length (length of text to be compressed), and compression ratio (number of compression tokens). The influences of these variables have been discussed in Section 4.3. To further analyze the influence of semantic information, we compare performance on original ArxivCorpus texts versus semantically meaningless texts.",
1050
+ "bbox": [
1051
+ 112,
1052
+ 648,
1053
+ 489,
1054
+ 822
1055
+ ],
1056
+ "page_idx": 6
1057
+ },
1058
+ {
1059
+ "type": "text",
1060
+ "text": "Table 6 demonstrates that semantic understanding improves compression quality, with $500\\mathrm{x}$ Compressor achieving $99.48\\%$ BLEU score on ArxivCorpus texts compared to $11.77\\%$ on random texts. The performance gap keeps in semantically meaningless scenarios, where $500\\mathrm{x}$ Compressor main",
1061
+ "bbox": [
1062
+ 112,
1063
+ 825,
1064
+ 489,
1065
+ 921
1066
+ ],
1067
+ "page_idx": 6
1068
+ },
1069
+ {
1070
+ "type": "table",
1071
+ "img_path": "images/a669e3f3d259583159eb04f9a711392c9193000d4f9e5badb94fdc223c25c2d3.jpg",
1072
+ "table_caption": [
1073
+ "Table 5: Examples of regenerated texts and QA pairs provided by $500\\mathrm{x}$ Compressor and ICAE. 96 tokens of the original text were compressed, which were then used for QA. Differences between the gold standard and the output include mistakes (red, containing incorrect text), information loss (yellow and italic, missing some text), hallucinations (blue, including text not present in the original), and paraphrasing (green, rephrasing the original text)."
1074
+ ],
1075
+ "table_footnote": [],
1076
+ "table_body": "<table><tr><td rowspan=\"2\">Dataset Length</td><td colspan=\"2\">Arxiv</td><td colspan=\"2\">Random</td></tr><tr><td>96</td><td>192</td><td>96</td><td>192</td></tr><tr><td>Ours500→16</td><td>99.48</td><td>96.21</td><td>11.77</td><td>2.78</td></tr><tr><td>ICAE500→16</td><td>81.85</td><td>55.90</td><td>2.06</td><td>0.84</td></tr><tr><td>Absolute Δ</td><td>17.62</td><td>40.31</td><td>9.70</td><td>1.93</td></tr></table>",
1077
+ "bbox": [
1078
+ 539,
1079
+ 564,
1080
+ 852,
1081
+ 627
1082
+ ],
1083
+ "page_idx": 6
1084
+ },
1085
+ {
1086
+ "type": "text",
1087
+ "text": "Table 6: Text regeneration performance (BLEU scores) on semantic (ArxivCorpus) and non-semantic (Random) texts. Random texts are generated by increasing each token ID from the ArxivCorpus texts by one position. Arxiv is ArxivCorpus.",
1088
+ "bbox": [
1089
+ 507,
1090
+ 638,
1091
+ 882,
1092
+ 709
1093
+ ],
1094
+ "page_idx": 6
1095
+ },
1096
+ {
1097
+ "type": "text",
1098
+ "text": "tains an advantage over ICAE (11.77% versus 2.06%), showing robust and improved preservation of both semantic and format-related information.",
1099
+ "bbox": [
1100
+ 507,
1101
+ 736,
1102
+ 880,
1103
+ 783
1104
+ ],
1105
+ "page_idx": 6
1106
+ },
1107
+ {
1108
+ "type": "text",
1109
+ "text": "5 Discussions",
1110
+ "text_level": 1,
1111
+ "bbox": [
1112
+ 507,
1113
+ 797,
1114
+ 643,
1115
+ 812
1116
+ ],
1117
+ "page_idx": 6
1118
+ },
1119
+ {
1120
+ "type": "text",
1121
+ "text": "The differences between 500xCompressor and ICAE could be better understood by comparing them to Prompt Tuning (Lester et al., 2021) and Prefix Tuning (Li and Liang, 2021). In Prompt Tuning, prefixed special tokens are trained to guide the model in completing specific downstream tasks.",
1122
+ "bbox": [
1123
+ 507,
1124
+ 825,
1125
+ 884,
1126
+ 921
1127
+ ],
1128
+ "page_idx": 6
1129
+ },
1130
+ {
1131
+ "type": "page_number",
1132
+ "text": "25087",
1133
+ "bbox": [
1134
+ 475,
1135
+ 927,
1136
+ 524,
1137
+ 940
1138
+ ],
1139
+ "page_idx": 6
1140
+ },
1141
+ {
1142
+ "type": "text",
1143
+ "text": "Similarly, ICAE compresses contexts into prefixed special tokens for downstream tasks. Unlike Prompt Tuning, Prefix Tuning also trains the KV values associated with the prefixed special tokens. 500xCompressor, akin to Prefix Tuning, compresses texts into the KV values of prefixed special tokens. In Prompt Tuning or Prefix Tuning, the prefixed special tokens (and their KV values) only save the instruction for the downstream task. However, in ICAE and 500xCompressor, these tokens compress detailed information within the context in addition to the instruction.",
1144
+ "bbox": [
1145
+ 112,
1146
+ 84,
1147
+ 492,
1148
+ 275
1149
+ ],
1150
+ "page_idx": 7
1151
+ },
1152
+ {
1153
+ "type": "text",
1154
+ "text": "There are three ways to understand the compressed tokens generated from natural language tokens: as memory, a new information format, and a new LLM language. Ge et al. (2024) associated text compression with working memory, viewing compressed tokens as an efficient way for LLMs to store knowledge. Cheng et al. (2024) interpreted text compression as a new information format, where compressed tokens, combined with natural language tokens, provide more information and have higher information richness. Jiang et al. (2023a) treated the compressed prompt as a new language for LLM. There are three elements that define a language: saving information, transmitting information, and adaptive evaluation. The compressed tokens could regenerate the original text, indicating that the information has been saved. Furthermore, these tokens can be used for downstream tasks and answer related questions, demonstrating their ability to transfer information. The ability of the compression models to process unseen texts further shows their generalization ability and adaptability. These characteristics make compressed tokens an efficient new language for LLMs.",
1155
+ "bbox": [
1156
+ 115,
1157
+ 279,
1158
+ 490,
1159
+ 665
1160
+ ],
1161
+ "page_idx": 7
1162
+ },
1163
+ {
1164
+ "type": "text",
1165
+ "text": "6 Related Work",
1166
+ "text_level": 1,
1167
+ "bbox": [
1168
+ 112,
1169
+ 681,
1170
+ 270,
1171
+ 697
1172
+ ],
1173
+ "page_idx": 7
1174
+ },
1175
+ {
1176
+ "type": "text",
1177
+ "text": "This work is related to prompt compression. There are two main approaches to reducing the number of prompt tokens: hard prompts and soft prompts.",
1178
+ "bbox": [
1179
+ 112,
1180
+ 709,
1181
+ 487,
1182
+ 758
1183
+ ],
1184
+ "page_idx": 7
1185
+ },
1186
+ {
1187
+ "type": "text",
1188
+ "text": "Hard prompt methods identify and delete low-information content in the prompt. Li et al. (2023) proposed SelectiveSentence in 2023, which identifies rich-information content at the sentence or word level. Later, Jiang et al. (2023a) proved that LLMs could understand incomplete words or sentences, leading to the development of LLMLingua, LongLLMingua, and LLMLingua-2, which delete useless tokens even if fluency is interrupted (Jiang et al., 2023a,b; Hu et al., 2022).",
1189
+ "bbox": [
1190
+ 112,
1191
+ 760,
1192
+ 489,
1193
+ 921
1194
+ ],
1195
+ "page_idx": 7
1196
+ },
1197
+ {
1198
+ "type": "text",
1199
+ "text": "Soft prompt methods compress natural language tokens into a small number of special tokens. Wingate et al. (2022) optimized the difference between the answers generated by the original prompt and the compressed prompt, but this method lacked generalization, requiring training for each new prompt. Mu et al. (2024) solved this by proposing GIST tokens, but their limitations included the need for fine-tuning the original LLM and the short length of prompts to be compressed, typically less than thirty tokens. ICAE solved these issues by pretraining the compression model and avoiding additional parameters during decoding, allowing compression of texts up to around 500 tokens without changing the original LLM (Ge et al., 2024). However, the maximum compression ratio of ICAE is about $15\\mathrm{x}$ . To increase the text length for compression, Chevalier et al. (2023) proposed AutoCompressor, which progressively compresses the prompt but, like GIST tokens, is limited to finetuned LLMs and a complex training process. Other works analyze text compression within paragraphs (Ren et al., 2023). Soft prompts are also applied in RAG through xRAG and COCOM (Cheng et al., 2024; Lau et al., 2024).",
1200
+ "bbox": [
1201
+ 507,
1202
+ 84,
1203
+ 885,
1204
+ 486
1205
+ ],
1206
+ "page_idx": 7
1207
+ },
1208
+ {
1209
+ "type": "text",
1210
+ "text": "It is worth noting that $500\\mathrm{x}$ Compressor is fundamentally a prompt compression method based on the soft prompt rather than a KV cache compression approach. While the KV values of compression tokens are used for inference, they remain unchanged throughout the process, with all compression processes done on the input prompts.",
1211
+ "bbox": [
1212
+ 507,
1213
+ 489,
1214
+ 885,
1215
+ 604
1216
+ ],
1217
+ "page_idx": 7
1218
+ },
1219
+ {
1220
+ "type": "text",
1221
+ "text": "7 Conclusions",
1222
+ "text_level": 1,
1223
+ "bbox": [
1224
+ 509,
1225
+ 626,
1226
+ 650,
1227
+ 642
1228
+ ],
1229
+ "page_idx": 7
1230
+ },
1231
+ {
1232
+ "type": "text",
1233
+ "text": "This paper proposes 500xCompressor, a prompt compression method capable of compressing any text and all tokens within it. 500xCompressor achieves a high compression ratio while retaining most capabilities of non-compressed prompts. This method proves that current prompts are highly compressible, developing further direction in compression applications.",
1234
+ "bbox": [
1235
+ 507,
1236
+ 659,
1237
+ 884,
1238
+ 789
1239
+ ],
1240
+ "page_idx": 7
1241
+ },
1242
+ {
1243
+ "type": "text",
1244
+ "text": "Future work would involve applications such as in-context learning, personalization, and RAG. 500xCompressor has shown good generalization ability on cross-domain tasks and increasing the size and diversity of the training data is expected to make 500xCompressor be able to finish more tasks (for example, tasks requiring flexible formats and long outputs) and achieve better results.",
1245
+ "bbox": [
1246
+ 507,
1247
+ 793,
1248
+ 885,
1249
+ 921
1250
+ ],
1251
+ "page_idx": 7
1252
+ },
1253
+ {
1254
+ "type": "page_number",
1255
+ "text": "25088",
1256
+ "bbox": [
1257
+ 475,
1258
+ 927,
1259
+ 524,
1260
+ 941
1261
+ ],
1262
+ "page_idx": 7
1263
+ },
1264
+ {
1265
+ "type": "text",
1266
+ "text": "Limitations",
1267
+ "text_level": 1,
1268
+ "bbox": [
1269
+ 114,
1270
+ 84,
1271
+ 220,
1272
+ 99
1273
+ ],
1274
+ "page_idx": 8
1275
+ },
1276
+ {
1277
+ "type": "text",
1278
+ "text": "A consideration in our work was the careful selection of training data to avoid copyright issues. We chose to use the ArxivCorpus rather than datasets like Pile, as Arxiv papers are officially uploaded with clear copyright through Cornell University. Future development should also carefully consider copyright when using different datasets.",
1279
+ "bbox": [
1280
+ 112,
1281
+ 109,
1282
+ 489,
1283
+ 221
1284
+ ],
1285
+ "page_idx": 8
1286
+ },
1287
+ {
1288
+ "type": "text",
1289
+ "text": "Ethics Statement",
1290
+ "text_level": 1,
1291
+ "bbox": [
1292
+ 114,
1293
+ 234,
1294
+ 265,
1295
+ 248
1296
+ ],
1297
+ "page_idx": 8
1298
+ },
1299
+ {
1300
+ "type": "text",
1301
+ "text": "No ethical approval was required for this study.",
1302
+ "bbox": [
1303
+ 114,
1304
+ 259,
1305
+ 465,
1306
+ 275
1307
+ ],
1308
+ "page_idx": 8
1309
+ },
1310
+ {
1311
+ "type": "text",
1312
+ "text": "Availability Statement",
1313
+ "text_level": 1,
1314
+ "bbox": [
1315
+ 114,
1316
+ 287,
1317
+ 310,
1318
+ 303
1319
+ ],
1320
+ "page_idx": 8
1321
+ },
1322
+ {
1323
+ "type": "text",
1324
+ "text": "The codes related to this study have been uploaded to the open source community at https://github.com/ZongqianLi/500xCompressor.",
1325
+ "bbox": [
1326
+ 112,
1327
+ 313,
1328
+ 487,
1329
+ 361
1330
+ ],
1331
+ "page_idx": 8
1332
+ },
1333
+ {
1334
+ "type": "text",
1335
+ "text": "References",
1336
+ "text_level": 1,
1337
+ "bbox": [
1338
+ 114,
1339
+ 388,
1340
+ 213,
1341
+ 401
1342
+ ],
1343
+ "page_idx": 8
1344
+ },
1345
+ {
1346
+ "type": "list",
1347
+ "sub_type": "ref_text",
1348
+ "list_items": [
1349
+ "Xin Cheng, Xun Wang, Xingxing Zhang, Tao Ge, Si-Qing Chen, Furu Wei, Huishuai Zhang, and Dongyan Zhao. 2024. xrag: Extreme context compression for retrieval-augmented generation with one token. arXiv preprint arXiv:2405.13792.",
1350
+ "Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting language models to compress contexts. In The 2023 Conference on Empirical Methods in Natural Language Processing.",
1351
+ "Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eun-sol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP.",
1352
+ "Tao Ge, Hu Jing, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. 2024. In-context autoencoder for context compression in a large language model. In The Twelfth International Conference on Learning Representations.",
1353
+ "Aaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, ..., and Zhiyu Ma. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.",
1354
+ "Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.",
1355
+ "Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023a. LLMLingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Pro"
1356
+ ],
1357
+ "bbox": [
1358
+ 115,
1359
+ 411,
1360
+ 489,
1361
+ 920
1362
+ ],
1363
+ "page_idx": 8
1364
+ },
1365
+ {
1366
+ "type": "list",
1367
+ "sub_type": "ref_text",
1368
+ "list_items": [
1369
+ "cessing, pages 13358-13376, Singapore. Association for Computational Linguistics.",
1370
+ "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023b. Longlmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. arXiv preprint arXiv:2310.06839.",
1371
+ "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.",
1372
+ "Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5376-5384.",
1373
+ "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.",
1374
+ "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.",
1375
+ "Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
1376
+ "Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333-342, Vancouver, Canada. Association for Computational Linguistics.",
1377
+ "Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics."
1378
+ ],
1379
+ "bbox": [
1380
+ 510,
1381
+ 85,
1382
+ 884,
1383
+ 920
1384
+ ],
1385
+ "page_idx": 8
1386
+ },
1387
+ {
1388
+ "type": "page_number",
1389
+ "text": "25089",
1390
+ "bbox": [
1391
+ 475,
1392
+ 927,
1393
+ 524,
1394
+ 940
1395
+ ],
1396
+ "page_idx": 8
1397
+ },
1398
+ {
1399
+ "type": "text",
1400
+ "text": "Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023. Compressing context to enhance inference efficiency of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6342-6353, Singapore. Association for Computational Linguistics.",
1401
+ "bbox": [
1402
+ 115,
1403
+ 85,
1404
+ 489,
1405
+ 165
1406
+ ],
1407
+ "page_idx": 9
1408
+ },
1409
+ {
1410
+ "type": "text",
1411
+ "text": "Jesse Mu, Xiang Lisa Li, and Noah Goodman. 2024. Learning to compress prompts with gist tokens. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.",
1412
+ "bbox": [
1413
+ 114,
1414
+ 173,
1415
+ 489,
1416
+ 240
1417
+ ],
1418
+ "page_idx": 9
1419
+ },
1420
+ {
1421
+ "type": "text",
1422
+ "text": "Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Ruhle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, and Dongmei Zhang. 2024. LLMingua-2: Data distillation for efficient and faithful task-agnostic prompt compression. In Findings of the Association for Computational Linguistics: ACL 2024, pages 963–981, Bangkok, Thailand. Association for Computational Linguistics.",
1423
+ "bbox": [
1424
+ 114,
1425
+ 249,
1426
+ 489,
1427
+ 368
1428
+ ],
1429
+ "page_idx": 9
1430
+ },
1431
+ {
1432
+ "type": "text",
1433
+ "text": "David Rau, Shuai Wang, Hervé Déjean, and Stephane Clinchant. 2024. Context embeddings for efficient answer generation in rag. arXiv preprint arXiv:2407.09252.",
1434
+ "bbox": [
1435
+ 114,
1436
+ 376,
1437
+ 489,
1438
+ 429
1439
+ ],
1440
+ "page_idx": 9
1441
+ },
1442
+ {
1443
+ "type": "text",
1444
+ "text": "Siyu Ren, Qi Jia, and Kenny Q. Zhu. 2023. Context compression for auto-regressive transformers with sentinel tokens. In The 2023 Conference on Empirical Methods in Natural Language Processing.",
1445
+ "bbox": [
1446
+ 114,
1447
+ 439,
1448
+ 489,
1449
+ 494
1450
+ ],
1451
+ "page_idx": 9
1452
+ },
1453
+ {
1454
+ "type": "text",
1455
+ "text": "David Wingate, Mohammad Shoeybi, and Taylor Sorensen. 2022. Prompt compression and contrastive conditioning for controllability and toxicity reduction in language models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5621-5634, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
1456
+ "bbox": [
1457
+ 114,
1458
+ 502,
1459
+ 489,
1460
+ 595
1461
+ ],
1462
+ "page_idx": 9
1463
+ },
1464
+ {
1465
+ "type": "text",
1466
+ "text": "A Appendices",
1467
+ "text_level": 1,
1468
+ "bbox": [
1469
+ 509,
1470
+ 84,
1471
+ 650,
1472
+ 99
1473
+ ],
1474
+ "page_idx": 9
1475
+ },
1476
+ {
1477
+ "type": "text",
1478
+ "text": "A.1 Model Training",
1479
+ "text_level": 1,
1480
+ "bbox": [
1481
+ 509,
1482
+ 109,
1483
+ 684,
1484
+ 124
1485
+ ],
1486
+ "page_idx": 9
1487
+ },
1488
+ {
1489
+ "type": "text",
1490
+ "text": "The training parameters for $500\\mathrm{x}$ Compressor and ICAE are detailed in Table 7. Main packages are torch 2.3.1 and transformers 4.42.3, and the full environment can be got from Section 7. The evaluation losses for both models are illustrated in Figure 4. All models have successfully converged, with $500\\mathrm{x}$ Compressor demonstrating better performance compared to ICAE, as indicated by the evaluation loss.",
1491
+ "bbox": [
1492
+ 507,
1493
+ 130,
1494
+ 884,
1495
+ 273
1496
+ ],
1497
+ "page_idx": 9
1498
+ },
1499
+ {
1500
+ "type": "table",
1501
+ "img_path": "images/7308fd111239e1dea2efec782544d675630fb146066c635a069a84eecb0f409e.jpg",
1502
+ "table_caption": [],
1503
+ "table_footnote": [],
1504
+ "table_body": "<table><tr><td></td><td colspan=\"2\">Pretraining</td><td colspan=\"2\">Finetuning</td></tr><tr><td></td><td>500→16</td><td>500→1</td><td>500→16</td><td>500→1</td></tr><tr><td>Total steps</td><td>42000</td><td>103800</td><td>20000</td><td>10000</td></tr><tr><td>Warm-up steps</td><td>300</td><td>300</td><td>300</td><td>300</td></tr><tr><td>Learning rate</td><td>1e-4</td><td>1e-4</td><td>5e-5</td><td>5e-5</td></tr><tr><td>Batch size</td><td>4</td><td>4</td><td>4</td><td>4</td></tr><tr><td>Optimizer</td><td>AdamW</td><td>AdamW</td><td>AdamW</td><td>AdamW</td></tr></table>",
1505
+ "bbox": [
1506
+ 526,
1507
+ 282,
1508
+ 867,
1509
+ 354
1510
+ ],
1511
+ "page_idx": 9
1512
+ },
1513
+ {
1514
+ "type": "text",
1515
+ "text": "Table 7: Training parameters for ${500}\\mathrm{x}$ Compressor and ICAE.",
1516
+ "bbox": [
1517
+ 507,
1518
+ 363,
1519
+ 882,
1520
+ 390
1521
+ ],
1522
+ "page_idx": 9
1523
+ },
1524
+ {
1525
+ "type": "text",
1526
+ "text": "A.2 ArxivCorpus and ArxivQA Dataset",
1527
+ "text_level": 1,
1528
+ "bbox": [
1529
+ 509,
1530
+ 422,
1531
+ 836,
1532
+ 437
1533
+ ],
1534
+ "page_idx": 9
1535
+ },
1536
+ {
1537
+ "type": "text",
1538
+ "text": "Source of Arxiv abstracts in the ArxivCorpus: https://www.kaggle.com/datasets/ Cornell-University/arxiv",
1539
+ "bbox": [
1540
+ 509,
1541
+ 443,
1542
+ 850,
1543
+ 489
1544
+ ],
1545
+ "page_idx": 9
1546
+ },
1547
+ {
1548
+ "type": "text",
1549
+ "text": "The detailed information for ArxivCorpus and the ArxivQA dataset is shown in Table 8.",
1550
+ "bbox": [
1551
+ 509,
1552
+ 491,
1553
+ 880,
1554
+ 521
1555
+ ],
1556
+ "page_idx": 9
1557
+ },
1558
+ {
1559
+ "type": "text",
1560
+ "text": "The prompt to generate the QA pairs:",
1561
+ "bbox": [
1562
+ 527,
1563
+ 523,
1564
+ 806,
1565
+ 539
1566
+ ],
1567
+ "page_idx": 9
1568
+ },
1569
+ {
1570
+ "type": "code",
1571
+ "sub_type": "code",
1572
+ "code_caption": [],
1573
+ "code_body": "context: {truncated_context} \ntask: design the {number} best extractive question answering pairs for the context to test information loss \nrequirement: the question should be direct; the question should try to use the same words in the context; the answer should directly appear in the context (a span of the context); the answer should not be in the question; just output the results in format and do not output other words \noutput json format: {{\"id\":1, \"question\": \"\", \"answer\": \"\", {\"id\":2, \"question\": \"\", \"answer\": \"\", ...]}",
1574
+ "guess_lang": "jsonl",
1575
+ "bbox": [
1576
+ 547,
1577
+ 550,
1578
+ 840,
1579
+ 693
1580
+ ],
1581
+ "page_idx": 9
1582
+ },
1583
+ {
1584
+ "type": "text",
1585
+ "text": "A.3 Question Answering",
1586
+ "text_level": 1,
1587
+ "bbox": [
1588
+ 509,
1589
+ 714,
1590
+ 721,
1591
+ 728
1592
+ ],
1593
+ "page_idx": 9
1594
+ },
1595
+ {
1596
+ "type": "text",
1597
+ "text": "Prompt for QA (Instruct):",
1598
+ "bbox": [
1599
+ 509,
1600
+ 734,
1601
+ 702,
1602
+ 749
1603
+ ],
1604
+ "page_idx": 9
1605
+ },
1606
+ {
1607
+ "type": "text",
1608
+ "text": "Please finish the extractive question answering task. Just output the answer. Context: {context} Question: {question} Answer:",
1609
+ "bbox": [
1610
+ 546,
1611
+ 760,
1612
+ 823,
1613
+ 799
1614
+ ],
1615
+ "page_idx": 9
1616
+ },
1617
+ {
1618
+ "type": "text",
1619
+ "text": "This paper and codes are helped with ChatGPT and Claude.",
1620
+ "bbox": [
1621
+ 507,
1622
+ 821,
1623
+ 882,
1624
+ 851
1625
+ ],
1626
+ "page_idx": 9
1627
+ },
1628
+ {
1629
+ "type": "page_number",
1630
+ "text": "25090",
1631
+ "bbox": [
1632
+ 475,
1633
+ 927,
1634
+ 524,
1635
+ 940
1636
+ ],
1637
+ "page_idx": 9
1638
+ },
1639
+ {
1640
+ "type": "table",
1641
+ "img_path": "images/1ac6a1a6cab4cbc0ac6461f785508f0579200c05d55bb8c0e2301a9436855323.jpg",
1642
+ "table_caption": [],
1643
+ "table_footnote": [],
1644
+ "table_body": "<table><tr><td rowspan=\"2\"></td><td rowspan=\"2\">Train</td><td colspan=\"2\">ArxivCorpus</td><td rowspan=\"2\">Test</td><td rowspan=\"2\">Train</td><td colspan=\"2\">ArxivQA Dataset</td></tr><tr><td>Development</td><td>Test</td><td>Development</td><td>Test</td></tr><tr><td>Number of data records</td><td>2353924</td><td>3000</td><td>2500</td><td>250000</td><td>2500</td><td>5000</td><td></td></tr><tr><td>Knowledge cutoff</td><td>Pre 07/2023</td><td>01-04/2024</td><td>01-04/2024</td><td>Pre 07/2023</td><td>Pre 07/2023</td><td>01-04/2024</td><td></td></tr><tr><td>Source</td><td colspan=\"3\">Abstracts from Arxiv</td><td colspan=\"2\">Train set of ArxivCorpus</td><td colspan=\"2\">Test set of ArxivCorpus</td></tr></table>",
1645
+ "bbox": [
1646
+ 168,
1647
+ 181,
1648
+ 828,
1649
+ 233
1650
+ ],
1651
+ "page_idx": 10
1652
+ },
1653
+ {
1654
+ "type": "text",
1655
+ "text": "Table 8: Detailed information about the ArxivCorpus and the ArxivQA dataset.",
1656
+ "bbox": [
1657
+ 228,
1658
+ 242,
1659
+ 766,
1660
+ 256
1661
+ ],
1662
+ "page_idx": 10
1663
+ },
1664
+ {
1665
+ "type": "image",
1666
+ "img_path": "images/7cb29487d695366e175164d40545b5b427702c9c19330299a1cd51a9fb90a856.jpg",
1667
+ "image_caption": [],
1668
+ "image_footnote": [],
1669
+ "bbox": [
1670
+ 129,
1671
+ 466,
1672
+ 497,
1673
+ 615
1674
+ ],
1675
+ "page_idx": 10
1676
+ },
1677
+ {
1678
+ "type": "image",
1679
+ "img_path": "images/7008ad98fd17267dd92b29d19a5d5ecb1d696404012dd93e28bc076346b8035a.jpg",
1680
+ "image_caption": [],
1681
+ "image_footnote": [],
1682
+ "bbox": [
1683
+ 500,
1684
+ 466,
1685
+ 867,
1686
+ 614
1687
+ ],
1688
+ "page_idx": 10
1689
+ },
1690
+ {
1691
+ "type": "image",
1692
+ "img_path": "images/f21586cd35c00d7a2c3b3018ba052cfb80c0ee80e5de72b771b46e59bafdf4bd.jpg",
1693
+ "image_caption": [
1694
+ "Figure 4: Evaluation loss for $500\\mathrm{x}$ Compressor and ICAE during pretraining and fine-tuning."
1695
+ ],
1696
+ "image_footnote": [],
1697
+ "bbox": [
1698
+ 129,
1699
+ 632,
1700
+ 494,
1701
+ 791
1702
+ ],
1703
+ "page_idx": 10
1704
+ },
1705
+ {
1706
+ "type": "image",
1707
+ "img_path": "images/012fc9c94649c76f4d51f57df79cf4e666883a970a66ec85c677735163629845.jpg",
1708
+ "image_caption": [],
1709
+ "image_footnote": [],
1710
+ "bbox": [
1711
+ 500,
1712
+ 632,
1713
+ 865,
1714
+ 791
1715
+ ],
1716
+ "page_idx": 10
1717
+ },
1718
+ {
1719
+ "type": "page_number",
1720
+ "text": "25091",
1721
+ "bbox": [
1722
+ 475,
1723
+ 927,
1724
+ 522,
1725
+ 940
1726
+ ],
1727
+ "page_idx": 10
1728
+ }
1729
+ ]
2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/03c637d4-7f7c-452b-825b-cd37087e895b_model.json ADDED
@@ -0,0 +1,1916 @@
1
+ [
2
+ [
3
+ {
4
+ "type": "title",
5
+ "bbox": [
6
+ 0.227,
7
+ 0.091,
8
+ 0.773,
9
+ 0.131
10
+ ],
11
+ "angle": 0,
12
+ "content": "500xCompressor: Generalized Prompt Compression for Large Language Models"
13
+ },
14
+ {
15
+ "type": "text",
16
+ "bbox": [
17
+ 0.147,
18
+ 0.159,
19
+ 0.351,
20
+ 0.207
21
+ ],
22
+ "angle": 0,
23
+ "content": "Zongqian Li \nUniversity of Cambridge \nz1510@cam.ac.uk"
24
+ },
25
+ {
26
+ "type": "text",
27
+ "bbox": [
28
+ 0.397,
29
+ 0.159,
30
+ 0.603,
31
+ 0.208
32
+ ],
33
+ "angle": 0,
34
+ "content": "Yixuan Su \nUniversity of Cambridge \nys484@cam.ac.uk"
35
+ },
36
+ {
37
+ "type": "text",
38
+ "bbox": [
39
+ 0.649,
40
+ 0.159,
41
+ 0.853,
42
+ 0.207
43
+ ],
44
+ "angle": 0,
45
+ "content": "Nigel Collier \nUniversity of Cambridge \nnhc30@cam.ac.uk"
46
+ },
47
+ {
48
+ "type": "title",
49
+ "bbox": [
50
+ 0.261,
51
+ 0.261,
52
+ 0.341,
53
+ 0.277
54
+ ],
55
+ "angle": 0,
56
+ "content": "Abstract"
57
+ },
58
+ {
59
+ "type": "text",
60
+ "bbox": [
61
+ 0.142,
62
+ 0.288,
63
+ 0.461,
64
+ 0.773
65
+ ],
66
+ "angle": 0,
67
+ "content": "Prompt compression is important for large language models (LLMs) to increase inference speed, reduce costs, and improve user experience. However, current methods face challenges such as low compression ratios and potential training-test overlap during evaluation. To address these issues, we propose 500xCompressor, a method that compresses natural language contexts into a minimum of one special token and demonstrates strong generalization ability. The 500xCompressor introduces approximately \\(0.3\\%\\) additional parameters and achieves compression ratios ranging from 6x to 500x, achieving \\(27 - 90\\%\\) reduction in calculations and \\(55 - 83\\%\\) memory savings when generating 100-400 tokens for new and reused prompts at 500x compression, while retaining \\(70 - 74\\%\\) (F1) and \\(77 - 84\\%\\) (Exact Match) of the LLM capabilities compared to using non-compressed prompts. It is designed to compress any text, answer various types of questions, and can be utilized by the original LLM without requiring fine-tuning. Initially, 500xCompressor was pretrained on the ArxivCorpus, followed by fine-tuning on the ArxivQA dataset, and subsequently evaluated on strictly unseen and cross-domain question answering (QA) datasets. This study shows that KV values outperform embeddings in preserving information at high compression ratios. The highly compressive nature of natural language prompts, even for detailed information, suggests potential for future applications and the development of a new LLM language."
68
+ },
69
+ {
70
+ "type": "title",
71
+ "bbox": [
72
+ 0.115,
73
+ 0.794,
74
+ 0.26,
75
+ 0.809
76
+ ],
77
+ "angle": 0,
78
+ "content": "1 Introduction"
79
+ },
80
+ {
81
+ "type": "text",
82
+ "bbox": [
83
+ 0.113,
84
+ 0.819,
85
+ 0.49,
86
+ 0.9
87
+ ],
88
+ "angle": 0,
89
+ "content": "Long prompts present several challenges in natural language processing applications, such as decreased inference speed, higher computation cost, and a negative influence on user experience. Additionally, the context length limit restricts model"
90
+ },
91
+ {
92
+ "type": "text",
93
+ "bbox": [
94
+ 0.136,
95
+ 0.907,
96
+ 0.478,
97
+ 0.921
98
+ ],
99
+ "angle": 0,
100
+ "content": "<https://github.com/ZongqianLi/500xCompressor>"
101
+ },
102
+ {
103
+ "type": "image",
104
+ "bbox": [
105
+ 0.526,
106
+ 0.259,
107
+ 0.864,
108
+ 0.603
109
+ ],
110
+ "angle": 0,
111
+ "content": null
112
+ },
113
+ {
114
+ "type": "image_caption",
115
+ "bbox": [
116
+ 0.509,
117
+ 0.613,
118
+ 0.885,
119
+ 0.643
120
+ ],
121
+ "angle": 0,
122
+ "content": "Figure 1: The original text is compressed by \\(500\\mathrm{x}\\) Compressor and utilized for downstream tasks."
123
+ },
124
+ {
125
+ "type": "text",
126
+ "bbox": [
127
+ 0.508,
128
+ 0.677,
129
+ 0.883,
130
+ 0.709
131
+ ],
132
+ "angle": 0,
133
+ "content": "performance and application scenarios, creating a strong demand for prompt length reduction."
134
+ },
135
+ {
136
+ "type": "text",
137
+ "bbox": [
138
+ 0.508,
139
+ 0.713,
140
+ 0.885,
141
+ 0.922
142
+ ],
143
+ "angle": 0,
144
+ "content": "Two primary methods for prompt compression have been proposed: hard prompt and soft prompt. Hard prompt methods, such as SelectiveSentence (Li et al., 2023) and LLMLingua (Jiang et al., 2023a), eliminate low-information sentences, words, or even tokens. On the other hand, soft prompt methods, including GIST (Mu et al., 2024), AutoCompressor (Chevalier et al., 2023), and ICAE (Ge et al., 2024), compress natural language tokens into a small number of special tokens. However, these methods have problems such as low compression ratios (low efficiency improvement), unclear information loss, and potential"
145
+ },
146
+ {
147
+ "type": "page_number",
148
+ "bbox": [
149
+ 0.475,
150
+ 0.928,
151
+ 0.524,
152
+ 0.941
153
+ ],
154
+ "angle": 0,
155
+ "content": "25081"
156
+ },
157
+ {
158
+ "type": "footer",
159
+ "bbox": [
160
+ 0.084,
161
+ 0.946,
162
+ 0.914,
163
+ 0.974
164
+ ],
165
+ "angle": 0,
166
+ "content": "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 25081-25091 July 27 - August 1, 2025 ©2025 Association for Computational Linguistics"
167
+ }
168
+ ],
169
+ [
170
+ {
171
+ "type": "image",
172
+ "bbox": [
173
+ 0.125,
174
+ 0.082,
175
+ 0.873,
176
+ 0.248
177
+ ],
178
+ "angle": 0,
179
+ "content": null
180
+ },
181
+ {
182
+ "type": "image_caption",
183
+ "bbox": [
184
+ 0.143,
185
+ 0.259,
186
+ 0.851,
187
+ 0.275
188
+ ],
189
+ "angle": 0,
190
+ "content": "Figure 2: Process of pretraining (left), fine-tuning (middle), and prediction (right) with \\(500\\mathrm{x}\\) Compressor."
191
+ },
192
+ {
193
+ "type": "text",
194
+ "bbox": [
195
+ 0.113,
196
+ 0.299,
197
+ 0.49,
198
+ 0.492
199
+ ],
200
+ "angle": 0,
201
+ "content": "training-test overlap during evaluation, as discussed in detail in Section 6. For instance, ICAE achieves compression ratios no higher than \\(15\\mathrm{x}\\), and the win rate evaluation metric cannot quantitatively measure the extent of information loss during compression. Additionally, evaluation texts sourced from the Wikipedia dataset might overlap with the training data for LLaMa series models (Grattafori et al., 2024), raising questions that the generated content could be retrieved from the memory of the LLM rather than the compressed prompts."
202
+ },
203
+ {
204
+ "type": "text",
205
+ "bbox": [
206
+ 0.113,
207
+ 0.498,
208
+ 0.49,
209
+ 0.675
210
+ ],
211
+ "angle": 0,
212
+ "content": "To solve these problems, we propose 500xCompressor, illustrated in Figure 1. This method compresses prompts of approximately 500 tokens into a minimum of one token, allowing the compressed tokens to regenerate the original texts or be used for QA. Although trained on the ArxivCorpus and ArxivQA dataset, 500xCompressor could generalize to answer other types of questions. Analysis demonstrates that detailed information, such as proper nouns, special names, and numbers, could be accurately compressed and retrieved."
213
+ },
214
+ {
215
+ "type": "text",
216
+ "bbox": [
217
+ 0.113,
218
+ 0.681,
219
+ 0.49,
220
+ 0.922
221
+ ],
222
+ "angle": 0,
223
+ "content": "500xCompressor retains the advantages of previous methods and introduces several additional characteristics. Similar to previous soft prompt methods, 500xCompressor is generalized and nonselective, capable of compressing unseen texts across various topics for QA, demonstrating its generalization ability. Unlike selective compression methods, 500xCompressor aims to regenerate the entire original text, ensuring that all tokens in the original text contribute to the compression tokens. Moreover, the compressed prompts could be used to regenerate original texts or for QA without requiring fine-tuning of the LLM, preserving the LLM's original capabilities and improving the convenience of using compressed tokens."
224
+ },
225
+ {
226
+ "type": "text",
227
+ "bbox": [
228
+ 0.509,
229
+ 0.3,
230
+ 0.885,
231
+ 0.331
232
+ ],
233
+ "angle": 0,
234
+ "content": "In addition to these existing advantages, we provide contributions in three main areas:"
235
+ },
236
+ {
237
+ "type": "text",
238
+ "bbox": [
239
+ 0.509,
240
+ 0.34,
241
+ 0.885,
242
+ 0.452
243
+ ],
244
+ "angle": 0,
245
+ "content": "- High Compression Ratio: This study evaluates the compression model with one and sixteen tokens to compress up to 500 tokens, achieving compression ratios up to \\(500\\mathrm{x}\\). These ratios significantly outperform previous studies, which reported ratios of less than \\(50\\mathrm{x}\\), fully testing the upper limit of prompt compression."
246
+ },
247
+ {
248
+ "type": "text",
249
+ "bbox": [
250
+ 0.509,
251
+ 0.454,
252
+ 0.885,
253
+ 0.549
254
+ ],
255
+ "angle": 0,
256
+ "content": "- Strict Unseen Evaluation: Using Arxiv abstracts published post-January 2024 ensures evaluation on content unseen by both the LLM and compression model, verifying that outputs are from compressed prompts rather than pre-existing model knowledge."
257
+ },
258
+ {
259
+ "type": "text",
260
+ "bbox": [
261
+ 0.509,
262
+ 0.551,
263
+ 0.884,
264
+ 0.646
265
+ ],
266
+ "angle": 0,
267
+ "content": "- Quantitative Analysis of Information Loss: Through extractive QA with context-span answers, we realize direct quantitative comparison between compressed and uncompressed performance, providing precise measurements of compression-resulting information loss."
268
+ },
269
+ {
270
+ "type": "list",
271
+ "bbox": [
272
+ 0.509,
273
+ 0.34,
274
+ 0.885,
275
+ 0.646
276
+ ],
277
+ "angle": 0,
278
+ "content": null
279
+ },
280
+ {
281
+ "type": "text",
282
+ "bbox": [
283
+ 0.508,
284
+ 0.656,
285
+ 0.885,
286
+ 0.816
287
+ ],
288
+ "angle": 0,
289
+ "content": "In this paper, the design of \\(500\\mathrm{x}\\) Compressor is first introduced in Section 2, including how to train and use the compression model. After that, Section 3 explains the training and evaluation datasets, the baselines, and the evaluation metrics. The evaluation results for regeneration and QA are presented in Section 4, with ablation studies analyzing the variables influencing the compression models. This is followed by discussions in Section 5, and finally, the sections on related work and conclusions."
290
+ },
291
+ {
292
+ "type": "title",
293
+ "bbox": [
294
+ 0.509,
295
+ 0.828,
296
+ 0.621,
297
+ 0.843
298
+ ],
299
+ "angle": 0,
300
+ "content": "2 Methods"
301
+ },
302
+ {
303
+ "type": "title",
304
+ "bbox": [
305
+ 0.509,
306
+ 0.853,
307
+ 0.625,
308
+ 0.868
309
+ ],
310
+ "angle": 0,
311
+ "content": "2.1 Training"
312
+ },
313
+ {
314
+ "type": "text",
315
+ "bbox": [
316
+ 0.508,
317
+ 0.874,
318
+ 0.882,
319
+ 0.922
320
+ ],
321
+ "angle": 0,
322
+ "content": "The training process for the compression model is illustrated in Figure 2, including both pretraining and fine-tuning stages. The compression model"
323
+ },
324
+ {
325
+ "type": "page_number",
326
+ "bbox": [
327
+ 0.477,
328
+ 0.928,
329
+ 0.526,
330
+ 0.941
331
+ ],
332
+ "angle": 0,
333
+ "content": "25082"
334
+ }
335
+ ],
336
+ [
337
+ {
338
+ "type": "text",
339
+ "bbox": [
340
+ 0.113,
341
+ 0.085,
342
+ 0.49,
343
+ 0.276
344
+ ],
345
+ "angle": 0,
346
+ "content": "comprises two components: an encoder and a decoder, which is similar to an autoencoder and comparable to ICAE. The encoder is the frozen LLM \\(\\Theta_{\\mathrm{LLM}}\\) with trainable LoRA parameters \\(\\Theta_{\\mathrm{Lora}}\\) (Hu et al., 2022), while the decoder is the original frozen LLM \\(\\Theta_{\\mathrm{LLM}}\\). The encoder receives the original text tokens \\(\\mathbf{T} = (t_1, t_2, \\dots, t_l)\\) and the compression tokens \\(\\mathbf{C} = (c_1, c_2, \\dots, c_k)\\). Through layers, the information in the text is saved into the compression tokens, whose KV values in each layer of the LLM \\(\\mathbf{H}_{\\mathbf{C}}\\) are output and passed to the decoder."
347
+ },
348
+ {
349
+ "type": "text",
350
+ "bbox": [
351
+ 0.113,
352
+ 0.279,
353
+ 0.49,
354
+ 0.44
355
+ ],
356
+ "angle": 0,
357
+ "content": "During pretraining, the inputs of the decoder are the KV values of the compression tokens from the encoder, the beginning of sequence token, and the original text tokens \\((\\mathbf{H}_{\\mathbf{C}},[\\mathbf{BOS}],\\mathbf{T})\\). The LLM is trained to regenerate the original text based on the KV values, using the end of sequence token [EOS] to denote when to stop. The cross-entropy loss between the output of the decoder and the original text is calculated and used to train the LoRA parameters in the encoder:"
358
+ },
359
+ {
360
+ "type": "equation",
361
+ "bbox": [
362
+ 0.127,
363
+ 0.463,
364
+ 0.488,
365
+ 0.509
366
+ ],
367
+ "angle": 0,
368
+ "content": "\\[\n\\mathcal {L} _ {\\mathrm {P}} = - \\sum_ {i = 1} ^ {l} \\log P \\left(t _ {i} \\mid \\mathbf {H} _ {\\mathbf {C}}, [ \\mathbf {B O S} ], t _ {1: i - 1}; \\Theta_ {\\mathrm {L L M}}, \\Theta_ {\\mathrm {L o r a}}\\right) \\tag {1}\n\\]"
369
+ },
370
+ {
371
+ "type": "text",
372
+ "bbox": [
373
+ 0.113,
374
+ 0.512,
375
+ 0.49,
376
+ 0.624
377
+ ],
378
+ "angle": 0,
379
+ "content": "For instruction fine-tuning, the process is similar to pretraining. However, instead of the original texts, the decoder is provided with questions \\(\\mathbf{Q} = (q_{1}, q_{2}, \\ldots, q_{m})\\) and answers \\(\\mathbf{A} = (a_{1}, a_{2}, \\ldots, a_{n})\\), which are used to train the LLM to retrieve information from the KV values of the compression tokens and generate answers:"
380
+ },
381
+ {
382
+ "type": "equation",
383
+ "bbox": [
384
+ 0.124,
385
+ 0.647,
386
+ 0.489,
387
+ 0.682
388
+ ],
389
+ "angle": 0,
390
+ "content": "\\[\n\\mathcal {L} _ {\\mathrm {F}} = - \\sum_ {j = 1} ^ {n} \\log P \\left(a _ {j} \\mid \\mathbf {H} _ {\\mathbf {C}}, q _ {1: m}, a _ {1: j - 1}; \\Theta_ {\\text {L L M}}, \\Theta_ {\\text {L o r a}}\\right) \\tag {2}\n\\]"
391
+ },
392
+ {
393
+ "type": "text",
394
+ "bbox": [
395
+ 0.113,
396
+ 0.696,
397
+ 0.489,
398
+ 0.774
399
+ ],
400
+ "angle": 0,
401
+ "content": "The training process ensures no training-test overlap, as the original LLM parameters in both the encoder and decoder remain unchanged, and no additional parameters are introduced in the decoder. Thus, no information is saved in the decoder."
402
+ },
403
+ {
404
+ "type": "text",
405
+ "bbox": [
406
+ 0.113,
407
+ 0.777,
408
+ 0.49,
409
+ 0.921
410
+ ],
411
+ "angle": 0,
412
+ "content": "Main differences between 500xCompressor and ICAE: (1) The input of ICAE decoder is the output embeddings for the compression tokens, whereas 500xCompressor uses the KV values for the compression tokens. KV values could save more information and do not increase inference time. (2) In addition, this paper uses the [BOS] token to guide the LLM to regenerate the compressed texts, while ICAE creates a trainable new token."
413
+ },
414
+ {
415
+ "type": "title",
416
+ "bbox": [
417
+ 0.51,
418
+ 0.085,
419
+ 0.64,
420
+ 0.1
421
+ ],
422
+ "angle": 0,
423
+ "content": "2.2 Prediction"
424
+ },
425
+ {
426
+ "type": "text",
427
+ "bbox": [
428
+ 0.508,
429
+ 0.107,
430
+ 0.886,
431
+ 0.236
432
+ ],
433
+ "angle": 0,
434
+ "content": "During prediction, all encoder and decoder parameters are frozen. The original text is fed into the encoder, which saves the information into compression tokens. These compression tokens' KV values are then input into the decoder, which regenerates the compressed text when guided by the [BOS] token or generates an answer based on a given question:"
435
+ },
436
+ {
437
+ "type": "equation",
438
+ "bbox": [
439
+ 0.548,
440
+ 0.258,
441
+ 0.884,
442
+ 0.281
443
+ ],
444
+ "angle": 0,
445
+ "content": "\\[\n\\hat {t} _ {i} = \\underset {\\hat {t} _ {i}} {\\arg \\max } P (\\hat {t} _ {i} | \\mathbf {H} _ {\\mathbf {C}}, [ \\mathbf {B O S} ], \\hat {t} _ {1: i - 1}; \\boldsymbol {\\Theta} _ {\\mathrm {L L M}}) \\tag {3}\n\\]"
446
+ },
447
+ {
448
+ "type": "equation",
449
+ "bbox": [
450
+ 0.55,
451
+ 0.298,
452
+ 0.884,
453
+ 0.32
454
+ ],
455
+ "angle": 0,
456
+ "content": "\\[\n\\hat {a} _ {j} = \\arg \\max _ {\\hat {a} _ {j}} P (\\hat {a} _ {j} | \\mathbf {H} _ {\\mathbf {C}}, q _ {1: m}, \\hat {a} _ {1: j - 1}; \\boldsymbol {\\Theta} _ {\\mathrm {L L M}}) \\tag {4}\n\\]"
457
+ },
458
+ {
459
+ "type": "text",
460
+ "bbox": [
461
+ 0.508,
462
+ 0.329,
463
+ 0.883,
464
+ 0.375
465
+ ],
466
+ "angle": 0,
467
+ "content": "where \\(\\hat{t}_i\\) denotes the \\(i\\)-th token in the regenerated text, and \\(\\hat{a}_j\\) indicates the \\(j\\)-th token in the generated answer."
468
+ },
469
+ {
470
+ "type": "text",
471
+ "bbox": [
472
+ 0.508,
473
+ 0.378,
474
+ 0.884,
475
+ 0.505
476
+ ],
477
+ "angle": 0,
478
+ "content": "By replacing the original text tokens with compressed tokens, the speed of answering questions is increased. This is because, in inference, each token in the question or generated answer must attend to the previous tokens. Replacing a large number of original text tokens with a small number of compressed tokens reduces computational needs."
479
+ },
480
+ {
481
+ "type": "title",
482
+ "bbox": [
483
+ 0.509,
484
+ 0.52,
485
+ 0.657,
486
+ 0.536
487
+ ],
488
+ "angle": 0,
489
+ "content": "3 Experiments"
490
+ },
491
+ {
492
+ "type": "title",
493
+ "bbox": [
494
+ 0.509,
495
+ 0.546,
496
+ 0.625,
497
+ 0.56
498
+ ],
499
+ "angle": 0,
500
+ "content": "3.1 Datasets"
501
+ },
502
+ {
503
+ "type": "text",
504
+ "bbox": [
505
+ 0.508,
506
+ 0.568,
507
+ 0.884,
508
+ 0.793
509
+ ],
510
+ "angle": 0,
511
+ "content": "The ArxivCorpus was used to pretrain 500xCompressor, and the compression model was then finetuned on the ArxivQA dataset. After that, six benchmarks with different context lengths were used to evaluate the compression models for various abilities: ArxivQA and TriviaQA (Joshi et al., 2017) for information extraction, RelationExtraction (Levy et al., 2017) for relation extraction, NaturalQuestions (Kwiatkowski et al., 2019) and TextbookQA (Kembhavi et al., 2017) for reading comprehension, and RACE (Lai et al., 2017) for reasoning. Among these datasets, ArxivQA is introduced in this paper, while the others are classical QA datasets from MRQA (Fisch et al., 2019)."
512
+ },
513
+ {
514
+ "type": "text",
515
+ "bbox": [
516
+ 0.508,
517
+ 0.794,
518
+ 0.884,
519
+ 0.92
520
+ ],
521
+ "angle": 0,
522
+ "content": "The ArxivCorpus comprises Arxiv paper abstracts published before April 2024, with pre-July 2023 papers forming the training set and post-January 2024 papers forming the development and test sets. Test set abstracts are selected by token lengths (96, 192, 288, 384, and 480) to evaluate the regeneration performance of prompt compression methods."
523
+ },
524
+ {
525
+ "type": "page_number",
526
+ "bbox": [
527
+ 0.476,
528
+ 0.928,
529
+ 0.526,
530
+ 0.941
531
+ ],
532
+ "angle": 0,
533
+ "content": "25083"
534
+ }
535
+ ],
536
+ [
537
+ {
538
+ "type": "text",
539
+ "bbox": [
540
+ 0.113,
541
+ 0.085,
542
+ 0.489,
543
+ 0.228
544
+ ],
545
+ "angle": 0,
546
+ "content": "The ArxivCorpus was chosen for several reasons: (1) High-quality academic content with clear publication timestamps, (2) Verified temporal separation from LLaMa-3's March 2023 knowledge cutoff, ensuring the regenerated texts are based on the compressed prompts rather than the memory of the LLM, and (3) Official distribution through Cornell University, addressing copyright problems that influence datasets like Pile."
547
+ },
548
+ {
549
+ "type": "text",
550
+ "bbox": [
551
+ 0.113,
552
+ 0.23,
553
+ 0.489,
554
+ 0.357
555
+ ],
556
+ "angle": 0,
557
+ "content": "The ArxivQA dataset (more than 250k QA pairs), derived from ArxivCorpus using LLaMa-3-70b-Instruct, contains extractive QA pairs with the number of QA pairs increasing proportionally with abstract length (starting with 5 pairs per 96-token abstract). Training and development QA pairs are generated from the training set abstracts, while test set pairs come from test set abstracts."
558
+ },
559
+ {
560
+ "type": "text",
561
+ "bbox": [
562
+ 0.113,
563
+ 0.359,
564
+ 0.489,
565
+ 0.472
566
+ ],
567
+ "angle": 0,
568
+ "content": "ArxivQA offers three main advantages: (1) Guaranteed LLM-unseen test contexts avoiding training-test overlap, (2) Extractive QA format allowing quantitative evaluation of information loss, and (3) Domain-specific questions generated by LLaMa-3-70b-Instruct based on ArxivCorpus ensuring both difficulty and quality."
569
+ },
570
+ {
571
+ "type": "title",
572
+ "bbox": [
573
+ 0.114,
574
+ 0.482,
575
+ 0.388,
576
+ 0.497
577
+ ],
578
+ "angle": 0,
579
+ "content": "3.2 Baselines and Gold Standard"
580
+ },
581
+ {
582
+ "type": "text",
583
+ "bbox": [
584
+ 0.113,
585
+ 0.504,
586
+ 0.489,
587
+ 0.648
588
+ ],
589
+ "angle": 0,
590
+ "content": "Two baseline approaches are chosen: LLMLingua2 (Pan et al., 2024), exemplifying hard prompt compression through selective token elimination, and ICAE, utilizing soft prompt compression via continuous vectors. Both methods process the compressed context alongside questions for LLM inference. The gold standard provides the LLM with the complete combination of instruction, uncompressed context, and question."
591
+ },
592
+ {
593
+ "type": "title",
594
+ "bbox": [
595
+ 0.114,
596
+ 0.66,
597
+ 0.319,
598
+ 0.673
599
+ ],
600
+ "angle": 0,
601
+ "content": "3.3 Evaluation Metrics"
602
+ },
603
+ {
604
+ "type": "text",
605
+ "bbox": [
606
+ 0.113,
607
+ 0.68,
608
+ 0.489,
609
+ 0.824
610
+ ],
611
+ "angle": 0,
612
+ "content": "For evaluating text regeneration, Rouge-2-F (based on bigram overlap) and BLEU (measuring n-gram precision) scores are used to assess the similarity between regenerated and original texts. For extractive QA tasks, F1 score (the harmonic mean of precision and recall) and Exact Match (EM) are used to measure answer accuracy. Higher scores in all these metrics indicate better performance, with a maximum value of \\(100\\%\\)."
613
+ },
614
+ {
615
+ "type": "title",
616
+ "bbox": [
617
+ 0.114,
618
+ 0.837,
619
+ 0.219,
620
+ 0.85
621
+ ],
622
+ "angle": 0,
623
+ "content": "3.4 Models"
624
+ },
625
+ {
626
+ "type": "text",
627
+ "bbox": [
628
+ 0.113,
629
+ 0.857,
630
+ 0.489,
631
+ 0.921
632
+ ],
633
+ "angle": 0,
634
+ "content": "The encoder of 500xCompressor is frozen LLaMA-3-8B-Instruct with trainable LoRA parameters (rank=64) and the decoder is frozen LLaMA-3-8B-Instruct (Grattafori et al., 2024)."
635
+ },
636
+ {
637
+ "type": "title",
638
+ "bbox": [
639
+ 0.51,
640
+ 0.084,
641
+ 0.608,
642
+ 0.099
643
+ ],
644
+ "angle": 0,
645
+ "content": "4 Results"
646
+ },
647
+ {
648
+ "type": "title",
649
+ "bbox": [
650
+ 0.509,
651
+ 0.111,
652
+ 0.744,
653
+ 0.127
654
+ ],
655
+ "angle": 0,
656
+ "content": "4.1 Efficiency Improvement"
657
+ },
658
+ {
659
+ "type": "text",
660
+ "bbox": [
661
+ 0.507,
662
+ 0.133,
663
+ 0.884,
664
+ 0.438
665
+ ],
666
+ "angle": 0,
667
+ "content": "Table 1 demonstrates the importance of prompt compression for efficiency gains, showing improvements in both first-time processing (new prompt) and cached processing scenarios (reused prompt) at \\(500\\mathrm{x}\\) compression. For new prompts, while compression introduces a minimal computational cost \\((+0.4\\%)\\), the savings increase substantially with output length, reaching \\(49.10\\%\\) reduction in computation at 400 tokens. Reused prompts demonstrate immediate computational benefits, achieving \\(90.64\\%\\) reduction at 100 tokens output length. For memory usage of KV cache, reused prompts achieve \\(99.80\\%\\) initial memory reduction, and both prompts show consistent memory savings from \\(83.16\\%\\) to \\(55.33\\%\\) as output length increases to 400 tokens. Given that real-world applications often involve batch processing and repeated access to the same content, these efficiency gains make \\(500\\mathrm{x}\\) Compressor valuable in real-world scenarios."
668
+ },
669
+ {
670
+ "type": "table",
671
+ "bbox": [
672
+ 0.533,
673
+ 0.45,
674
+ 0.86,
675
+ 0.55
676
+ ],
677
+ "angle": 0,
678
+ "content": "<table><tr><td rowspan=\"2\">Output Length</td><td colspan=\"2\">Calculations</td><td colspan=\"2\">Memory</td></tr><tr><td>New prompt</td><td>Reused prompt</td><td>New prompt</td><td>Reused prompt</td></tr><tr><td>0</td><td>+0.4</td><td>0</td><td>+0.2</td><td>-99.80</td></tr><tr><td>100</td><td>-27.39</td><td>-90.64</td><td>-83.16</td><td>-83.16</td></tr><tr><td>200</td><td>-40.47</td><td>-83.09</td><td>-71.28</td><td>-71.28</td></tr><tr><td>300</td><td>-46.56</td><td>-76.71</td><td>-62.37</td><td>-62.37</td></tr><tr><td>400</td><td>-49.10</td><td>-71.23</td><td>-55.33</td><td>-55.33</td></tr></table>"
679
+ },
680
+ {
681
+ "type": "table_caption",
682
+ "bbox": [
683
+ 0.508,
684
+ 0.559,
685
+ 0.883,
686
+ 0.644
687
+ ],
688
+ "angle": 0,
689
+ "content": "Table 1: Computation and memory savings (in percentage) achieved by \\(500\\mathrm{x}\\) Compressor for different output lengths (token) at \\(500\\rightarrow 1\\) compression. A new prompt refers to first-time processing of input, while a reused prompt denotes repeated processing that can utilize cached intermediate results."
690
+ },
691
+ {
692
+ "type": "title",
693
+ "bbox": [
694
+ 0.509,
695
+ 0.675,
696
+ 0.699,
697
+ 0.69
698
+ ],
699
+ "angle": 0,
700
+ "content": "4.2 Text Regeneration"
701
+ },
702
+ {
703
+ "type": "text",
704
+ "bbox": [
705
+ 0.507,
706
+ 0.696,
707
+ 0.884,
708
+ 0.824
709
+ ],
710
+ "angle": 0,
711
+ "content": "The text regeneration capabilities of different prompt compression methods are evaluated on the strictly unseen test set of ArxivCorpus. Table 2 shows the performance across varying context lengths and compression ratios, measured by Rouge-2-F and BLEU scores. Our analysis examines the overall advantages, influencing variables, and stability of the improvements."
712
+ },
713
+ {
714
+ "type": "text",
715
+ "bbox": [
716
+ 0.508,
717
+ 0.826,
718
+ 0.884,
719
+ 0.922
720
+ ],
721
+ "angle": 0,
722
+ "content": "500xCompressor demonstrates consistent better performance over ICAE across all test scenarios. Our method outperforms ICAE on both Rouge-2-F and BLEU metrics for all context lengths and compression ratios tested. Quantitatively, the average improvement for Rouge-2-F scores increases by"
723
+ },
724
+ {
725
+ "type": "page_number",
726
+ "bbox": [
727
+ 0.477,
728
+ 0.928,
729
+ 0.526,
730
+ 0.941
731
+ ],
732
+ "angle": 0,
733
+ "content": "25084"
734
+ }
735
+ ],
736
+ [
737
+ {
738
+ "type": "table",
739
+ "bbox": [
740
+ 0.128,
741
+ 0.081,
742
+ 0.872,
743
+ 0.191
744
+ ],
745
+ "angle": 0,
746
+ "content": "<table><tr><td rowspan=\"2\">Dataset Length Eval. Metrics</td><td colspan=\"2\">96</td><td colspan=\"2\">192</td><td colspan=\"2\">ArxivCorpus</td><td colspan=\"2\">384</td><td colspan=\"2\">480</td><td colspan=\"2\">Average</td></tr><tr><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td></tr><tr><td>Ours500→16</td><td>99.53</td><td>99.48</td><td>96.49</td><td>96.21</td><td>82.31</td><td>80.93</td><td>55.36</td><td>53.50</td><td>31.55</td><td>32.19</td><td>73.05</td><td>72.46</td></tr><tr><td>ICAE500→16</td><td>83.52</td><td>81.85</td><td>58.21</td><td>55.90</td><td>40.96</td><td>38.37</td><td>34.28</td><td>32.03</td><td>29.71</td><td>29.61</td><td>49.33</td><td>47.55</td></tr><tr><td>Absolute Δ</td><td>16.02</td><td>17.62</td><td>38.28</td><td>40.31</td><td>41.35</td><td>42.56</td><td>21.07</td><td>21.46</td><td>1.84</td><td>2.58</td><td>23.71</td><td>24.91</td></tr><tr><td>Relative Δ</td><td>19.19%</td><td>21.53%</td><td>65.76%</td><td>72.12%</td><td>100.96%</td><td>110.92%</td><td>61.47%</td><td>66.98%</td><td>6.20%</td><td>8.71%</td><td>48.07%</td><td>52.38%</td></tr><tr><td>Ours500→1</td><td>53.49</td><td>49.77</td><td>29.73</td><td>26.53</td><td>22.15</td><td>19.15</td><td>20.61</td><td>17.91</td><td>18.85</td><td>18.80</td><td>28.97</td><td>26.43</td></tr><tr><td>ICAE500→1</td><td>30.29</td><td>24.18</td><td>18.21</td><td>13.94</td><td>13.89</td><td>10.36</td><td>13.36</td><td>9.92</td><td>12.28</td><td>11.68</td><td>17.61</td><td>14.02</td></tr><tr><td>Absolute Δ</td><td>23.19</td><td>25.59</td><td>11.51</td><td>12.59</td><td>8.25</td><td>8.79</td><td>7.25</td><td>7.99</td><td>6.56</td><td>7.11</td><td>11.36</td><td>12.41</td></tr><tr><td>Relative Δ</td><td>76.58%</td><td>105.84%</td><td>63.22%</td><td>90.33%</td><td>59.45%</td><td>84.81%</td><td>54.30%</td><td>80.48%</td><td>53.44%</td><td>60.86%</td><td>64.50%</td><td>88.56%</td></tr></table>"
747
+ },
748
+ {
749
+ "type": "table_caption",
750
+ "bbox": [
751
+ 0.113,
752
+ 0.201,
753
+ 0.883,
754
+ 0.287
755
+ ],
756
+ "angle": 0,
757
+ "content": "Table 2: Quantitative evaluation of text regeneration performance on the ArxivCorpus dataset with strictly unseen texts. RG and BL denote Rouge-2-F and BLEU scores respectively. The notation \\(\\mathrm{X}\\rightarrow \\mathrm{Y}\\) indicates compression from a maximum of X input tokens to Y compression tokens. Higher values between \\(500\\mathrm{xCompressor}\\) (Ours) and ICAE baseline are shown in bold and their performance differences are shown by absolute and relative \\(\\Delta\\). All improvements (shown in green) demonstrate the consistent better performance of \\(500\\mathrm{xCompressor}\\) across varying context lengths and compression ratios."
758
+ },
759
+ {
760
+ "type": "text",
761
+ "bbox": [
762
+ 0.113,
763
+ 0.312,
764
+ 0.486,
765
+ 0.342
766
+ ],
767
+ "angle": 0,
768
+ "content": "23.71 points \\((48.07\\%)\\) and 11.36 points \\((64.50\\%)\\) at \\(31.25\\mathrm{x}\\) and \\(500\\mathrm{x}\\)."
769
+ },
770
+ {
771
+ "type": "text",
772
+ "bbox": [
773
+ 0.113,
774
+ 0.344,
775
+ 0.489,
776
+ 0.536
777
+ ],
778
+ "angle": 0,
779
+ "content": "The regeneration performance exhibits clear dependencies on both compression ratio and context length. Lower compression ratios and shorter contexts yield higher text precision, with Rouge-2-F and BLEU scores consistently exceeding \\(95\\%\\) in optimal conditions. As compression ratios increase, the relative improvements over ICAE become more obvious, showing relative gains of \\(64.50\\%\\) in Rouge-2-F and \\(88.56\\%\\) in BLEU scores. While performance naturally decreases with longer contexts, the decrease rate shows a stable trend across different compression scenarios."
780
+ },
781
+ {
782
+ "type": "text",
783
+ "bbox": [
784
+ 0.113,
785
+ 0.537,
786
+ 0.489,
787
+ 0.632
788
+ ],
789
+ "angle": 0,
790
+ "content": "The method exhibits consistent stability in performance gains. Both absolute and relative improvements remain uniform across Rouge-2-F and BLEU metrics, indicating robust improvement in regeneration quality regardless of the evaluation criteria used."
791
+ },
792
+ {
793
+ "type": "text",
794
+ "bbox": [
795
+ 0.113,
796
+ 0.634,
797
+ 0.489,
798
+ 0.682
799
+ ],
800
+ "angle": 0,
801
+ "content": "While the results above demonstrate good text regeneration ability, the true performance of compression is better shown in downstream QA tasks."
802
+ },
803
+ {
804
+ "type": "title",
805
+ "bbox": [
806
+ 0.114,
807
+ 0.692,
808
+ 0.322,
809
+ 0.708
810
+ ],
811
+ "angle": 0,
812
+ "content": "4.3 Question Answering"
813
+ },
814
+ {
815
+ "type": "text",
816
+ "bbox": [
817
+ 0.113,
818
+ 0.712,
819
+ 0.489,
820
+ 0.905
821
+ ],
822
+ "angle": 0,
823
+ "content": "Figure 3 shows the performance of different prompt compression methods across varying compression ratios on QA datasets. \\(500\\mathrm{x}\\) Compressor consistently outperforms baseline methods at all compression ratios tested, confirming that KV values have advantages over embeddings (used in ICAE) in preserving information. Notably, as the compression ratio increases from \\(31.25\\mathrm{x}\\) to \\(500\\mathrm{x}\\), \\(500\\mathrm{x}\\) Compressor exhibits better performance retention, dropping from \\(74.53\\%\\) to \\(70.60\\%\\) (F1 score) and from \\(84.57\\%\\) to \\(77.92\\%\\) (Exact Match) of its uncompressed performance."
824
+ },
825
+ {
826
+ "type": "text",
827
+ "bbox": [
828
+ 0.132,
829
+ 0.906,
830
+ 0.488,
831
+ 0.921
832
+ ],
833
+ "angle": 0,
834
+ "content": "Tables 3 and 4 present evaluation results for"
835
+ },
836
+ {
837
+ "type": "image",
838
+ "bbox": [
839
+ 0.565,
840
+ 0.312,
841
+ 0.831,
842
+ 0.497
843
+ ],
844
+ "angle": 0,
845
+ "content": null
846
+ },
847
+ {
848
+ "type": "image_caption",
849
+ "bbox": [
850
+ 0.508,
851
+ 0.51,
852
+ 0.884,
853
+ 0.625
854
+ ],
855
+ "angle": 0,
856
+ "content": "Figure 3: Performance comparison of prompt compression methods on in-domain and cross-domain QA datasets across varying compression ratios. Y-axis shows F1 scores normalized by uncompressed performance, while X-axis (log scale) denotes compression ratios defined as #maximum_uncompressed_tokens/#compression_tokens. \\(\\uparrow\\) indicates higher values are better."
857
+ },
858
+ {
859
+ "type": "text",
860
+ "bbox": [
861
+ 0.508,
862
+ 0.657,
863
+ 0.883,
864
+ 0.721
865
+ ],
866
+ "angle": 0,
867
+ "content": "500xCompressor on in-domain and cross-domain QA datasets. These results are analyzed from five aspects: overall performance, influencing variables, generalization capability, scalability, and stability."
868
+ },
869
+ {
870
+ "type": "text",
871
+ "bbox": [
872
+ 0.508,
873
+ 0.725,
874
+ 0.884,
875
+ 0.886
876
+ ],
877
+ "angle": 0,
878
+ "content": "Overall Performance 500xCompressor demonstrates higher performance across nearly all context lengths, compression ratios, and both in-domain and cross-domain datasets compared to ICAE and LLMLingua-2. In cross-domain evaluation, it achieves average improvements of 7.10 F1 and 7.61 EM points (19.94% and 37.66% relative) at \\(500\\rightarrow 16\\) compression, with improvements increasing to 21.93 F1 and 16.64 EM points (107.70% and 161.58% relative) at \\(500\\rightarrow 1\\) compression."
879
+ },
880
+ {
881
+ "type": "text",
882
+ "bbox": [
883
+ 0.509,
884
+ 0.89,
885
+ 0.883,
886
+ 0.922
887
+ ],
888
+ "angle": 0,
889
+ "content": "Performance variables The effectiveness of compression is influenced by both context length"
890
+ },
891
+ {
892
+ "type": "page_number",
893
+ "bbox": [
894
+ 0.476,
895
+ 0.928,
896
+ 0.526,
897
+ 0.941
898
+ ],
899
+ "angle": 0,
900
+ "content": "25085"
901
+ }
902
+ ],
903
+ [
904
+ {
905
+ "type": "table",
906
+ "bbox": [
907
+ 0.123,
908
+ 0.081,
909
+ 0.878,
910
+ 0.221
911
+ ],
912
+ "angle": 0,
913
+ "content": "<table><tr><td rowspan=\"2\">Dataset Length Eval. Metrics</td><td colspan=\"2\">96</td><td colspan=\"2\">192</td><td colspan=\"6\">ArxivQA</td><td colspan=\"2\">Average</td></tr><tr><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td></tr><tr><td>Instruct</td><td>64.41</td><td>12.40</td><td>61.18</td><td>13.90</td><td>56.08</td><td>9.00</td><td>52.86</td><td>12.40</td><td>44.57</td><td>16.40</td><td>55.82</td><td>12.82</td></tr><tr><td>\\( Lingua_{500}\\rightarrow 64 \\)</td><td>45.88</td><td>7.90</td><td>29.91</td><td>8.20</td><td>21.39</td><td>4.20</td><td>17.68</td><td>3.40</td><td>16.17</td><td>4.20</td><td>26.21</td><td>5.58</td></tr><tr><td>\\( Lingua_{500}\\rightarrow 32 \\)</td><td>26.97</td><td>3.60</td><td>20.45</td><td>4.40</td><td>15.82</td><td>2.40</td><td>13.00</td><td>2.00</td><td>12.28</td><td>2.10</td><td>17.70</td><td>2.90</td></tr><tr><td>\\( Ours_{500}\\rightarrow 16 \\)</td><td>60.49</td><td>25.60</td><td>47.65</td><td>16.50</td><td>35.50</td><td>8.40</td><td>30.00</td><td>7.10</td><td>31.98</td><td>11.70</td><td>41.12</td><td>13.86</td></tr><tr><td>\\( ICAE_{500}\\rightarrow 16 \\)</td><td>57.95</td><td>23.20</td><td>44.41</td><td>15.10</td><td>33.88</td><td>7.70</td><td>28.06</td><td>7.20</td><td>29.72</td><td>10.60</td><td>38.80</td><td>12.76</td></tr><tr><td>Absolute Δ</td><td>2.54</td><td>2.40</td><td>3.23</td><td>1.40</td><td>1.62</td><td>0.70</td><td>1.94</td><td>0.10</td><td>2.25</td><td>1.10</td><td>2.31</td><td>1.10</td></tr><tr><td>Relative Δ</td><td>4.38%</td><td>10.34%</td><td>7.29%</td><td>9.27%</td><td>4.78%</td><td>9.09%</td><td>6.91%</td><td>1.38%</td><td>7.59%</td><td>10.37%</td><td>5.97%</td><td>8.62%</td></tr><tr><td>\\( Ours_{500}\\rightarrow 1 \\)</td><td>42.91</td><td>10.30</td><td>32.88</td><td>6.50</td><td>25.82</td><td>3.80</td><td>23.01</td><td>3.60</td><td>24.29</td><td>6.50</td><td>29.78</td><td>6.14</td></tr><tr><td>\\( ICAE_{500}\\rightarrow 1 \\)</td><td>26.87</td><td>3.50</td><td>21.76</td><td>2.30</td><td>20.34</td><td>2.20</td><td>17.35</td><td>1.70</td><td>17.72</td><td>3.60</td><td>20.81</td><td>2.66</td></tr><tr><td>Absolute Δ</td><td>16.04</td><td>6.80</td><td>11.12</td><td>4.20</td><td>5.47</td><td>1.60</td><td>5.65</td><td>1.90</td><td>6.56</td><td>2.90</td><td>8.97</td><td>3.48</td></tr><tr><td>Relative Δ</td><td>59.71%</td><td>194.28%</td><td>51.12%</td><td>182.60%</td><td>26.89%</td><td>72.72%</td><td>32.58%</td><td>111.76%</td><td>37.03%</td><td>80.55%</td><td>43.11%</td><td>130.82%</td></tr></table>"
914
+ },
915
+ {
916
+ "type": "table_caption",
917
+ "bbox": [
918
+ 0.113,
919
+ 0.23,
920
+ 0.885,
921
+ 0.302
922
+ ],
923
+ "angle": 0,
924
+ "content": "Table 3: In-domain QA evaluation results on the ArxivQA dataset with strictly unseen contexts. Length indicates the length of the context to be compressed. F1 and Exact Match (EM) scores are reported across varying context lengths. \"Instruct\" means full-context performance with instructions, while Lingua denotes LLMLingua-2 baseline. Performance deltas \\((\\Delta)\\) between \\(500\\mathrm{x}\\) Compressor (Ours) and ICAE baseline are shown in green (improvements) and red (decreases)."
925
+ },
926
+ {
927
+ "type": "table",
928
+ "bbox": [
929
+ 0.117,
930
+ 0.313,
931
+ 0.883,
932
+ 0.453
933
+ ],
934
+ "angle": 0,
935
+ "content": "<table><tr><td rowspan=\"2\">Dataset Length Eval. Metrics</td><td colspan=\"2\">RE 39 (553)</td><td colspan=\"2\">NaturalQ 258 (2721)</td><td colspan=\"2\">RACE 369 (824)</td><td colspan=\"2\">TextbookQA 729 (963)</td><td colspan=\"2\">TriviaQA 955 (2124)</td><td colspan=\"2\">Average</td></tr><tr><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td></tr><tr><td>Instruct</td><td>71.16</td><td>52.98</td><td>66.30</td><td>39.92</td><td>39.55</td><td>13.94</td><td>45.15</td><td>19.49</td><td>63.80</td><td>41.65</td><td>57.19</td><td>33.60</td></tr><tr><td>\\( Lingua_{500}\\rightarrow{16} \\)</td><td>57.78</td><td>41.58</td><td>40.46</td><td>23.15</td><td>12.58</td><td>5.93</td><td>29.38</td><td>19.16</td><td>56.06</td><td>46.26</td><td>39.25</td><td>27.22</td></tr><tr><td>\\( Lingua_{500}\\rightarrow{8} \\)</td><td>37.85</td><td>23.98</td><td>32.71</td><td>17.94</td><td>9.11</td><td>3.11</td><td>28.67</td><td>17.29</td><td>56.15</td><td>45.58</td><td>32.90</td><td>21.58</td></tr><tr><td>\\( Ours_{500}\\rightarrow{16} \\)</td><td>68.47</td><td>50.06</td><td>45.53</td><td>26.40</td><td>25.53</td><td>10.97</td><td>30.31</td><td>18.36</td><td>43.76</td><td>33.25</td><td>42.72</td><td>27.81</td></tr><tr><td>\\( ICAE_{500}\\rightarrow{16} \\)</td><td>66.03</td><td>44.60</td><td>46.10</td><td>25.18</td><td>24.32</td><td>9.64</td><td>13.27</td><td>4.79</td><td>28.36</td><td>16.78</td><td>35.62</td><td>20.20</td></tr><tr><td>Absolute \\( \\Delta \\)</td><td>2.43</td><td>5.46</td><td>0.57</td><td>1.21</td><td>1.20</td><td>1.33</td><td>17.03</td><td>13.57</td><td>15.40</td><td>16.46</td><td>7.10</td><td>7.61</td></tr><tr><td>Relative \\( \\Delta \\)</td><td>3.69%</td><td>12.24%</td><td>1.24%</td><td>4.82%</td><td>4.95%</td><td>13.84%</td><td>128.30%</td><td>283.33%</td><td>54.32%</td><td>98.08%</td><td>19.94%</td><td>37.66%</td></tr><tr><td>\\( Ours_{500}\\rightarrow{1} \\)</td><td>63.49</td><td>45.72</td><td>41.36</td><td>22.38</td><td>21.37</td><td>7.41</td><td>30.67</td><td>16.83</td><td>54.61</td><td>42.40</td><td>42.30</td><td>26.95</td></tr><tr><td>\\( ICAE_{500}\\rightarrow{1} \\)</td><td>44.49</td><td>27.27</td><td>26.65</td><td>11.21</td><td>14.24</td><td>4.45</td><td>6.19</td><td>2.19</td><td>10.24</td><td>6.37</td><td>20.36</td><td>10.30</td></tr><tr><td>Absolute \\( \\Delta \\)</td><td>18.99</td><td>18.45</td><td>14.70</td><td>11.16</td><td>7.12</td><td>2.96</td><td>24.48</td><td>14.63</td><td>44.37</td><td>36.03</td><td>21.93</td><td>16.64</td></tr><tr><td>Relative \\( \\Delta \\)</td><td>42.70%</td><td>67.66%</td><td>55.17%</td><td>99.52%</td><td>50.03%</td><td>66.41%</td><td>395.24%</td><td>666.14%</td><td>432.97%</td><td>565.32%</td><td>107.70%</td><td>161.58%</td></tr></table>"
936
+ },
937
+ {
938
+ "type": "table_caption",
939
+ "bbox": [
940
+ 0.113,
941
+ 0.462,
942
+ 0.884,
943
+ 0.505
944
+ ],
945
+ "angle": 0,
946
+ "content": "Table 4: Cross-domain QA evaluation results on diverse QA datasets including RelationExtraction (RE), NaturalQuestions (NaturalQ), RACE, TextbookQA, and TriviaQA. Context lengths are reported as average (maximum) token counts."
947
+ },
948
+ {
949
+ "type": "text",
950
+ "bbox": [
951
+ 0.113,
952
+ 0.531,
953
+ 0.489,
954
+ 0.627
955
+ ],
956
+ "angle": 0,
957
+ "content": "and compression ratio. While lower compression ratios and shorter contexts generally yield better performance, some cross-domain datasets exhibit interesting results. Notably, TextbookQA and TriviaQA show improved performance at \\(500\\rightarrow 1\\) compared to \\(500\\rightarrow 16\\) compression."
958
+ },
959
+ {
960
+ "type": "text",
961
+ "bbox": [
962
+ 0.113,
963
+ 0.63,
964
+ 0.49,
965
+ 0.773
966
+ ],
967
+ "angle": 0,
968
+ "content": "Generalization Capability The model's generalization ability is clearly shown in its cross-domain performance. The performance gap between \\(500\\mathrm{x}\\) Compressor and ICAE widens at higher compression ratios across all QA datasets. Cross-domain improvements are consistently larger than in-domain gains, reaching up to \\(107.70\\%\\) relative improvement in the average F1 at \\(500\\rightarrow 1\\) compression."
969
+ },
970
+ {
971
+ "type": "text",
972
+ "bbox": [
973
+ 0.113,
974
+ 0.777,
975
+ 0.49,
976
+ 0.922
977
+ ],
978
+ "angle": 0,
979
+ "content": "Scalability and Robustness Context length influences performance differently across domains. For in-domain tasks, performance decreases stabilizes with increasing context length. In cross-domain scenarios, longer average context lengths relate with larger improvements, as proved by TextbookQA and TriviaQA showing substantial gains of \\(395.24\\%\\) and \\(432.97\\%\\) relative improvement respectively over ICAE at \\(500 \\rightarrow 1\\) compression."
980
+ },
981
+ {
982
+ "type": "text",
983
+ "bbox": [
984
+ 0.508,
985
+ 0.531,
986
+ 0.885,
987
+ 0.61
988
+ ],
989
+ "angle": 0,
990
+ "content": "\\(500\\mathrm{x}\\) Compressor demonstrates better robustness to increased compression ratios as well, with average F1 scores decreasing by only 0.42 points from \\(500 \\rightarrow 16\\) to \\(500 \\rightarrow 1\\), compared to ICAE's 15.26-point reduction."
991
+ },
992
+ {
993
+ "type": "text",
994
+ "bbox": [
995
+ 0.508,
996
+ 0.612,
997
+ 0.885,
998
+ 0.707
999
+ ],
1000
+ "angle": 0,
1001
+ "content": "Stability The performance improvements exhibit consistency in both absolute and relative gains across different compression ratios and datasets. This stability is observed in F1 and EM improvements and remains in both in-domain and cross-domain evaluations."
1002
+ },
1003
+ {
1004
+ "type": "title",
1005
+ "bbox": [
1006
+ 0.509,
1007
+ 0.722,
1008
+ 0.646,
1009
+ 0.737
1010
+ ],
1011
+ "angle": 0,
1012
+ "content": "4.4 Case Study"
1013
+ },
1014
+ {
1015
+ "type": "text",
1016
+ "bbox": [
1017
+ 0.508,
1018
+ 0.744,
1019
+ 0.885,
1020
+ 0.856
1021
+ ],
1022
+ "angle": 0,
1023
+ "content": "Table 5 presents comparative examples of text regeneration and QA pairs among \\(500\\mathrm{x}\\) Compressor and baselines. The examples verify previous findings that baselines demonstrate higher rates of information loss, mistakes, and hallucinations compared to \\(500\\mathrm{x}\\) Compressor in both text regeneration and QA tasks."
1024
+ },
1025
+ {
1026
+ "type": "text",
1027
+ "bbox": [
1028
+ 0.508,
1029
+ 0.858,
1030
+ 0.884,
1031
+ 0.922
1032
+ ],
1033
+ "angle": 0,
1034
+ "content": "While regeneration quality typically relates with QA performance, the examples reveal exceptions where accurate text regeneration leads to incorrect QA responses, and imperfect regeneration pro"
1035
+ },
1036
+ {
1037
+ "type": "page_number",
1038
+ "bbox": [
1039
+ 0.476,
1040
+ 0.928,
1041
+ 0.526,
1042
+ 0.941
1043
+ ],
1044
+ "angle": 0,
1045
+ "content": "25086"
1046
+ }
1047
+ ],
1048
+ [
1049
+ {
1050
+ "type": "table",
1051
+ "bbox": [
1052
+ 0.117,
1053
+ 0.081,
1054
+ 0.88,
1055
+ 0.462
1056
+ ],
1057
+ "angle": 0,
1058
+ "content": "<table><tr><td>Original</td><td>500xCompressor (ours)</td><td>ICAE (baseline)</td></tr><tr><td>We show that every reciprocity sheaf gives rise to a cycle (pre)module in the sense of Rost over a perfect field. Over a perfect field of positive characteristic, we show that the first cohomology group of a logarithmic de Rham-Witt sheaf has a partial cycle module structure. As a consequence, we show that Kato complexes of logarithmic de Rham-Witt sheaves satisfy functoriality properties similar to Rost&#x27;s cycle complexes.</td><td>We show that every reciprocity sheaf gives rise to a cycle (pre)module in the sense of Rost over a perfect field. Over a perfect field of positive characteristic, we show that the first co-homology group of a logarithmic de Rham-Witt cycle module has a partial cycle structure. As a consequence, we show that Kato modules of logarithmic de Rham-Witt complexes satisfy functorial properties similar to Rost&#x27;s cycle complexes.</td><td>We show that every sheaf reciprocity gives rise to a cycle (pre)module in the sense of Rost over a perfect field Rost cycle. In the perfect field case, we show that over a positive characteristic field, the first logarithmic de Rham-Witt sheaf cohomology group of a Witt log-Witt cycle has a partial cycle module structure decomposition. As a consequence, we show that Kato&#x27;s cycle complexes of logarithmic de Rham-Witt sheaves satisfy functoriality properties similar to Rost cycle complexes.</td></tr><tr><td>Q: What type of sheaf gives rise to a cycle premodule?A: Every reciprocity sheaf.</td><td>A: a reciprocity sheaf</td><td>A: a reciprocity sheaf of (logarithmic) differential forms</td></tr><tr><td>Q: Over what type of field do we show that Kato complexes satisfy functoriality properties?A: Over a perfect field of positive characteristic.</td><td>A: perfect fields of positive characteristic</td><td>A: a perfect field of positive characteristic zero</td></tr><tr><td>Q: What is the structure of the first cohomology group of a logarithmic de Rham-Witt sheaf?A: a partial cycle module structure</td><td>A: a partial cycle complex</td><td>A: a partial Kato cycle complex</td></tr><tr><td>Q: What type of complexes satisfy functoriality properties similar to Rost&#x27;s cycle complexes?A: Kato complexes of logarithmic de Rham-Witt sheaves</td><td>A: Kato-Witt cycle complexes of logarithmic de Rham-Witt sheaves</td><td>A: Kato&#x27;s complexes of logarithmic de Rham-Witt sheaves</td></tr></table>"
1059
+ },
1060
+ {
1061
+ "type": "table_caption",
1062
+ "bbox": [
1063
+ 0.113,
1064
+ 0.471,
1065
+ 0.884,
1066
+ 0.543
1067
+ ],
1068
+ "angle": 0,
1069
+ "content": "Table 5: Examples of regenerated texts and QA pairs provided by \\(500\\mathrm{x}\\) Compressor and ICAE. 96 tokens of the original text were compressed, which were then used for QA. Differences between the gold standard and the output include mistakes (red, containing incorrect text), information loss (yellow and italic, missing some text), hallucinations (blue, including text not present in the original), and paraphrasing (green, rephrasing the original text)."
1070
+ },
1071
+ {
1072
+ "type": "text",
1073
+ "bbox": [
1074
+ 0.113,
1075
+ 0.568,
1076
+ 0.489,
1077
+ 0.618
1078
+ ],
1079
+ "angle": 0,
1080
+ "content": "duces correct answers. This observation highlights the relationship between compression quality and task performance."
1081
+ },
1082
+ {
1083
+ "type": "title",
1084
+ "bbox": [
1085
+ 0.114,
1086
+ 0.628,
1087
+ 0.292,
1088
+ 0.643
1089
+ ],
1090
+ "angle": 0,
1091
+ "content": "4.5 Ablation Studies"
1092
+ },
1093
+ {
1094
+ "type": "text",
1095
+ "bbox": [
1096
+ 0.113,
1097
+ 0.649,
1098
+ 0.49,
1099
+ 0.824
1100
+ ],
1101
+ "angle": 0,
1102
+ "content": "The performance of compression models is influenced by several variables, including the compression method (ICAE or 500xCompressor), task type (in-domain or cross-domain datasets), context length (length of text to be compressed), and compression ratio (number of compression tokens). The influences of these variables have been discussed in Section 4.3. To further analyze the influence of semantic information, we compare performance on original ArxivCorpus texts versus semantically meaningless texts."
1103
+ },
1104
+ {
1105
+ "type": "text",
1106
+ "bbox": [
1107
+ 0.113,
1108
+ 0.826,
1109
+ 0.49,
1110
+ 0.922
1111
+ ],
1112
+ "angle": 0,
1113
+ "content": "Table 6 demonstrates that semantic understanding improves compression quality, with \\(500\\mathrm{x}\\) Compressor achieving \\(99.48\\%\\) BLEU score on ArxivCorpus texts compared to \\(11.77\\%\\) on random texts. The performance gap keeps in semantically meaningless scenarios, where \\(500\\mathrm{x}\\) Compressor main"
1114
+ },
1115
+ {
1116
+ "type": "table",
1117
+ "bbox": [
1118
+ 0.541,
1119
+ 0.565,
1120
+ 0.853,
1121
+ 0.629
1122
+ ],
1123
+ "angle": 0,
1124
+ "content": "<table><tr><td rowspan=\"2\">Dataset Length</td><td colspan=\"2\">Arxiv</td><td colspan=\"2\">Random</td></tr><tr><td>96</td><td>192</td><td>96</td><td>192</td></tr><tr><td>Ours500→16</td><td>99.48</td><td>96.21</td><td>11.77</td><td>2.78</td></tr><tr><td>ICAE500→16</td><td>81.85</td><td>55.90</td><td>2.06</td><td>0.84</td></tr><tr><td>Absolute Δ</td><td>17.62</td><td>40.31</td><td>9.70</td><td>1.93</td></tr></table>"
1125
+ },
1126
+ {
1127
+ "type": "table_caption",
1128
+ "bbox": [
1129
+ 0.508,
1130
+ 0.639,
1131
+ 0.884,
1132
+ 0.71
1133
+ ],
1134
+ "angle": 0,
1135
+ "content": "Table 6: Text regeneration performance (BLEU scores) on semantic (ArxivCorpus) and non-semantic (Random) texts. Random texts are generated by increasing each token ID from the ArxivCorpus texts by one position. Arxiv is ArxivCorpus."
1136
+ },
1137
+ {
1138
+ "type": "text",
1139
+ "bbox": [
1140
+ 0.508,
1141
+ 0.737,
1142
+ 0.882,
1143
+ 0.784
1144
+ ],
1145
+ "angle": 0,
1146
+ "content": "tains an advantage over ICAE (11.77% versus 2.06%), showing robust and improved preservation of both semantic and format-related information."
1147
+ },
1148
+ {
1149
+ "type": "title",
1150
+ "bbox": [
1151
+ 0.509,
1152
+ 0.798,
1153
+ 0.644,
1154
+ 0.813
1155
+ ],
1156
+ "angle": 0,
1157
+ "content": "5 Discussions"
1158
+ },
1159
+ {
1160
+ "type": "text",
1161
+ "bbox": [
1162
+ 0.508,
1163
+ 0.826,
1164
+ 0.885,
1165
+ 0.922
1166
+ ],
1167
+ "angle": 0,
1168
+ "content": "The differences between 500xCompressor and ICAE could be better understood by comparing them to Prompt Tuning (Lester et al., 2021) and Prefix Tuning (Li and Liang, 2021). In Prompt Tuning, prefixed special tokens are trained to guide the model in completing specific downstream tasks."
1169
+ },
1170
+ {
1171
+ "type": "page_number",
1172
+ "bbox": [
1173
+ 0.476,
1174
+ 0.928,
1175
+ 0.526,
1176
+ 0.941
1177
+ ],
1178
+ "angle": 0,
1179
+ "content": "25087"
1180
+ }
1181
+ ],
1182
+ [
1183
+ {
1184
+ "type": "text",
1185
+ "bbox": [
1186
+ 0.113,
1187
+ 0.085,
1188
+ 0.493,
1189
+ 0.277
1190
+ ],
1191
+ "angle": 0,
1192
+ "content": "Similarly, ICAE compresses contexts into prefixed special tokens for downstream tasks. Unlike Prompt Tuning, Prefix Tuning also trains the KV values associated with the prefixed special tokens. 500xCompressor, akin to Prefix Tuning, compresses texts into the KV values of prefixed special tokens. In Prompt Tuning or Prefix Tuning, the prefixed special tokens (and their KV values) only save the instruction for the downstream task. However, in ICAE and 500xCompressor, these tokens compress detailed information within the context in addition to the instruction."
1193
+ },
1194
+ {
1195
+ "type": "text",
1196
+ "bbox": [
1197
+ 0.117,
1198
+ 0.28,
1199
+ 0.492,
1200
+ 0.666
1201
+ ],
1202
+ "angle": 0,
1203
+ "content": "There are three ways to understand the compressed tokens generated from natural language tokens: as memory, a new information format, and a new LLM language. Ge et al. (2024) associated text compression with working memory, viewing compressed tokens as an efficient way for LLMs to store knowledge. Cheng et al. (2024) interpreted text compression as a new information format, where compressed tokens, combined with natural language tokens, provide more information and have higher information richness. Jiang et al. (2023a) treated the compressed prompt as a new language for LLM. There are three elements that define a language: saving information, transmitting information, and adaptive evaluation. The compressed tokens could regenerate the original text, indicating that the information has been saved. Furthermore, these tokens can be used for downstream tasks and answer related questions, demonstrating their ability to transfer information. The ability of the compression models to process unseen texts further shows their generalization ability and adaptability. These characteristics make compressed tokens an efficient new language for LLMs."
1204
+ },
1205
+ {
1206
+ "type": "title",
1207
+ "bbox": [
1208
+ 0.114,
1209
+ 0.682,
1210
+ 0.271,
1211
+ 0.699
1212
+ ],
1213
+ "angle": 0,
1214
+ "content": "6 Related Work"
1215
+ },
1216
+ {
1217
+ "type": "text",
1218
+ "bbox": [
1219
+ 0.113,
1220
+ 0.711,
1221
+ 0.489,
1222
+ 0.759
1223
+ ],
1224
+ "angle": 0,
1225
+ "content": "This work is related to prompt compression. There are two main approaches to reducing the number of prompt tokens: hard prompts and soft prompts."
1226
+ },
1227
+ {
1228
+ "type": "text",
1229
+ "bbox": [
1230
+ 0.113,
1231
+ 0.761,
1232
+ 0.49,
1233
+ 0.922
1234
+ ],
1235
+ "angle": 0,
1236
+ "content": "Hard prompt methods identify and delete low-information content in the prompt. Li et al. (2023) proposed SelectiveSentence in 2023, which identifies rich-information content at the sentence or word level. Later, Jiang et al. (2023a) proved that LLMs could understand incomplete words or sentences, leading to the development of LLMLingua, LongLLMingua, and LLMLingua-2, which delete useless tokens even if fluency is interrupted (Jiang et al., 2023a,b; Hu et al., 2022)."
1237
+ },
1238
+ {
1239
+ "type": "text",
1240
+ "bbox": [
1241
+ 0.508,
1242
+ 0.085,
1243
+ 0.887,
1244
+ 0.487
1245
+ ],
1246
+ "angle": 0,
1247
+ "content": "Soft prompt methods compress natural language tokens into a small number of special tokens. Wingate et al. (2022) optimized the difference between the answers generated by the original prompt and the compressed prompt, but this method lacked generalization, requiring training for each new prompt. Mu et al. (2024) solved this by proposing GIST tokens, but their limitations included the need for fine-tuning the original LLM and the short length of prompts to be compressed, typically less than thirty tokens. ICAE solved these issues by pretraining the compression model and avoiding additional parameters during decoding, allowing compression of texts up to around 500 tokens without changing the original LLM (Ge et al., 2024). However, the maximum compression ratio of ICAE is about \\(15\\mathrm{x}\\). To increase the text length for compression, Chevalier et al. (2023) proposed AutoCompressor, which progressively compresses the prompt but, like GIST tokens, is limited to finetuned LLMs and a complex training process. Other works analyze text compression within paragraphs (Ren et al., 2023). Soft prompts are also applied in RAG through xRAG and COCOM (Cheng et al., 2024; Lau et al., 2024)."
1248
+ },
1249
+ {
1250
+ "type": "text",
1251
+ "bbox": [
1252
+ 0.508,
1253
+ 0.491,
1254
+ 0.886,
1255
+ 0.605
1256
+ ],
1257
+ "angle": 0,
1258
+ "content": "It is worth noting that \\(500\\mathrm{x}\\) Compressor is fundamentally a prompt compression method based on the soft prompt rather than a KV cache compression approach. While the KV values of compression tokens are used for inference, they remain unchanged throughout the process, with all compression processes done on the input prompts."
1259
+ },
1260
+ {
1261
+ "type": "title",
1262
+ "bbox": [
1263
+ 0.51,
1264
+ 0.627,
1265
+ 0.651,
1266
+ 0.643
1267
+ ],
1268
+ "angle": 0,
1269
+ "content": "7 Conclusions"
1270
+ },
1271
+ {
1272
+ "type": "text",
1273
+ "bbox": [
1274
+ 0.508,
1275
+ 0.661,
1276
+ 0.885,
1277
+ 0.79
1278
+ ],
1279
+ "angle": 0,
1280
+ "content": "This paper proposes 500xCompressor, a prompt compression method capable of compressing any text and all tokens within it. 500xCompressor achieves a high compression ratio while retaining most capabilities of non-compressed prompts. This method proves that current prompts are highly compressible, developing further direction in compression applications."
1281
+ },
1282
+ {
1283
+ "type": "text",
1284
+ "bbox": [
1285
+ 0.508,
1286
+ 0.794,
1287
+ 0.886,
1288
+ 0.922
1289
+ ],
1290
+ "angle": 0,
1291
+ "content": "Future work would involve applications such as in-context learning, personalization, and RAG. 500xCompressor has shown good generalization ability on cross-domain tasks and increasing the size and diversity of the training data is expected to make 500xCompressor be able to finish more tasks (for example, tasks requiring flexible formats and long outputs) and achieve better results."
1292
+ },
1293
+ {
1294
+ "type": "page_number",
1295
+ "bbox": [
1296
+ 0.477,
1297
+ 0.928,
1298
+ 0.526,
1299
+ 0.942
1300
+ ],
1301
+ "angle": 0,
1302
+ "content": "25088"
1303
+ }
1304
+ ],
1305
+ [
1306
+ {
1307
+ "type": "title",
1308
+ "bbox": [
1309
+ 0.115,
1310
+ 0.085,
1311
+ 0.221,
1312
+ 0.1
1313
+ ],
1314
+ "angle": 0,
1315
+ "content": "Limitations"
1316
+ },
1317
+ {
1318
+ "type": "text",
1319
+ "bbox": [
1320
+ 0.113,
1321
+ 0.11,
1322
+ 0.49,
1323
+ 0.222
1324
+ ],
1325
+ "angle": 0,
1326
+ "content": "A consideration in our work was the careful selection of training data to avoid copyright issues. We chose to use the ArxivCorpus rather than datasets like Pile, as Arxiv papers are officially uploaded with clear copyright through Cornell University. Future development should also carefully consider copyright when using different datasets."
1327
+ },
1328
+ {
1329
+ "type": "title",
1330
+ "bbox": [
1331
+ 0.115,
1332
+ 0.235,
1333
+ 0.267,
1334
+ 0.249
1335
+ ],
1336
+ "angle": 0,
1337
+ "content": "Ethics Statement"
1338
+ },
1339
+ {
1340
+ "type": "text",
1341
+ "bbox": [
1342
+ 0.115,
1343
+ 0.26,
1344
+ 0.467,
1345
+ 0.277
1346
+ ],
1347
+ "angle": 0,
1348
+ "content": "No ethical approval was required for this study."
1349
+ },
1350
+ {
1351
+ "type": "title",
1352
+ "bbox": [
1353
+ 0.115,
1354
+ 0.288,
1355
+ 0.311,
1356
+ 0.304
1357
+ ],
1358
+ "angle": 0,
1359
+ "content": "Availability Statement"
1360
+ },
1361
+ {
1362
+ "type": "text",
1363
+ "bbox": [
1364
+ 0.114,
1365
+ 0.314,
1366
+ 0.489,
1367
+ 0.362
1368
+ ],
1369
+ "angle": 0,
1370
+ "content": "The codes related to this study have been uploaded to the open source community at https://github.com/ZongqianLi/500xCompressor."
1371
+ },
1372
+ {
1373
+ "type": "title",
1374
+ "bbox": [
1375
+ 0.115,
1376
+ 0.389,
1377
+ 0.214,
1378
+ 0.403
1379
+ ],
1380
+ "angle": 0,
1381
+ "content": "References"
1382
+ },
1383
+ {
1384
+ "type": "ref_text",
1385
+ "bbox": [
1386
+ 0.117,
1387
+ 0.412,
1388
+ 0.49,
1389
+ 0.478
1390
+ ],
1391
+ "angle": 0,
1392
+ "content": "Xin Cheng, Xun Wang, Xingxing Zhang, Tao Ge, Si-Qing Chen, Furu Wei, Huishuai Zhang, and Dongyan Zhao. 2024. xrag: Extreme context compression for retrieval-augmented generation with one token. arXiv preprint arXiv:2405.13792."
1393
+ },
1394
+ {
1395
+ "type": "ref_text",
1396
+ "bbox": [
1397
+ 0.117,
1398
+ 0.488,
1399
+ 0.489,
1400
+ 0.541
1401
+ ],
1402
+ "angle": 0,
1403
+ "content": "Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting language models to compress contexts. In The 2023 Conference on Empirical Methods in Natural Language Processing."
1404
+ },
1405
+ {
1406
+ "type": "ref_text",
1407
+ "bbox": [
1408
+ 0.117,
1409
+ 0.551,
1410
+ 0.489,
1411
+ 0.629
1412
+ ],
1413
+ "angle": 0,
1414
+ "content": "Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eun-sol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP."
1415
+ },
1416
+ {
1417
+ "type": "ref_text",
1418
+ "bbox": [
1419
+ 0.117,
1420
+ 0.64,
1421
+ 0.49,
1422
+ 0.706
1423
+ ],
1424
+ "angle": 0,
1425
+ "content": "Tao Ge, Hu Jing, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. 2024. In-context autoencoder for context compression in a large language model. In The Twelfth International Conference on Learning Representations."
1426
+ },
1427
+ {
1428
+ "type": "ref_text",
1429
+ "bbox": [
1430
+ 0.117,
1431
+ 0.716,
1432
+ 0.49,
1433
+ 0.768
1434
+ ],
1435
+ "angle": 0,
1436
+ "content": "Aaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, ..., and Zhiyu Ma. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783."
1437
+ },
1438
+ {
1439
+ "type": "ref_text",
1440
+ "bbox": [
1441
+ 0.117,
1442
+ 0.779,
1443
+ 0.489,
1444
+ 0.845
1445
+ ],
1446
+ "angle": 0,
1447
+ "content": "Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations."
1448
+ },
1449
+ {
1450
+ "type": "ref_text",
1451
+ "bbox": [
1452
+ 0.117,
1453
+ 0.855,
1454
+ 0.49,
1455
+ 0.921
1456
+ ],
1457
+ "angle": 0,
1458
+ "content": "Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023a. LLMLingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Pro"
1459
+ },
1460
+ {
1461
+ "type": "list",
1462
+ "bbox": [
1463
+ 0.117,
1464
+ 0.412,
1465
+ 0.49,
1466
+ 0.921
1467
+ ],
1468
+ "angle": 0,
1469
+ "content": null
1470
+ },
1471
+ {
1472
+ "type": "ref_text",
1473
+ "bbox": [
1474
+ 0.529,
1475
+ 0.086,
1476
+ 0.882,
1477
+ 0.113
1478
+ ],
1479
+ "angle": 0,
1480
+ "content": "cessing, pages 13358-13376, Singapore. Association for Computational Linguistics."
1481
+ },
1482
+ {
1483
+ "type": "ref_text",
1484
+ "bbox": [
1485
+ 0.512,
1486
+ 0.122,
1487
+ 0.885,
1488
+ 0.188
1489
+ ],
1490
+ "angle": 0,
1491
+ "content": "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023b. Longlmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. arXiv preprint arXiv:2310.06839."
1492
+ },
1493
+ {
1494
+ "type": "ref_text",
1495
+ "bbox": [
1496
+ 0.512,
1497
+ 0.197,
1498
+ 0.885,
1499
+ 0.29
1500
+ ],
1501
+ "angle": 0,
1502
+ "content": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics."
1503
+ },
1504
+ {
1505
+ "type": "ref_text",
1506
+ "bbox": [
1507
+ 0.512,
1508
+ 0.298,
1509
+ 0.885,
1510
+ 0.389
1511
+ ],
1512
+ "angle": 0,
1513
+ "content": "Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5376-5384."
1514
+ },
1515
+ {
1516
+ "type": "ref_text",
1517
+ "bbox": [
1518
+ 0.512,
1519
+ 0.399,
1520
+ 0.885,
1521
+ 0.517
1522
+ ],
1523
+ "angle": 0,
1524
+ "content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466."
1525
+ },
1526
+ {
1527
+ "type": "ref_text",
1528
+ "bbox": [
1529
+ 0.512,
1530
+ 0.526,
1531
+ 0.885,
1532
+ 0.619
1533
+ ],
1534
+ "angle": 0,
1535
+ "content": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics."
1536
+ },
1537
+ {
1538
+ "type": "ref_text",
1539
+ "bbox": [
1540
+ 0.512,
1541
+ 0.627,
1542
+ 0.885,
1543
+ 0.719
1544
+ ],
1545
+ "angle": 0,
1546
+ "content": "Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics."
1547
+ },
1548
+ {
1549
+ "type": "ref_text",
1550
+ "bbox": [
1551
+ 0.512,
1552
+ 0.728,
1553
+ 0.885,
1554
+ 0.807
1555
+ ],
1556
+ "angle": 0,
1557
+ "content": "Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333-342, Vancouver, Canada. Association for Computational Linguistics."
1558
+ },
1559
+ {
1560
+ "type": "ref_text",
1561
+ "bbox": [
1562
+ 0.512,
1563
+ 0.816,
1564
+ 0.885,
1565
+ 0.921
1566
+ ],
1567
+ "angle": 0,
1568
+ "content": "Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics."
1569
+ },
1570
+ {
1571
+ "type": "list",
1572
+ "bbox": [
1573
+ 0.512,
1574
+ 0.086,
1575
+ 0.885,
1576
+ 0.921
1577
+ ],
1578
+ "angle": 0,
1579
+ "content": null
1580
+ },
1581
+ {
1582
+ "type": "page_number",
1583
+ "bbox": [
1584
+ 0.477,
1585
+ 0.928,
1586
+ 0.526,
1587
+ 0.941
1588
+ ],
1589
+ "angle": 0,
1590
+ "content": "25089"
1591
+ }
1592
+ ],
1593
+ [
1594
+ {
1595
+ "type": "text",
1596
+ "bbox": [
1597
+ 0.116,
1598
+ 0.086,
1599
+ 0.49,
1600
+ 0.166
1601
+ ],
1602
+ "angle": 0,
1603
+ "content": "Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023. Compressing context to enhance inference efficiency of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6342-6353, Singapore. Association for Computational Linguistics."
1604
+ },
1605
+ {
1606
+ "type": "text",
1607
+ "bbox": [
1608
+ 0.115,
1609
+ 0.174,
1610
+ 0.49,
1611
+ 0.241
1612
+ ],
1613
+ "angle": 0,
1614
+ "content": "Jesse Mu, Xiang Lisa Li, and Noah Goodman. 2024. Learning to compress prompts with gist tokens. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc."
1615
+ },
1616
+ {
1617
+ "type": "text",
1618
+ "bbox": [
1619
+ 0.115,
1620
+ 0.25,
1621
+ 0.49,
1622
+ 0.369
1623
+ ],
1624
+ "angle": 0,
1625
+ "content": "Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Ruhle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, and Dongmei Zhang. 2024. LLMingua-2: Data distillation for efficient and faithful task-agnostic prompt compression. In Findings of the Association for Computational Linguistics: ACL 2024, pages 963–981, Bangkok, Thailand. Association for Computational Linguistics."
1626
+ },
1627
+ {
1628
+ "type": "text",
1629
+ "bbox": [
1630
+ 0.115,
1631
+ 0.378,
1632
+ 0.49,
1633
+ 0.43
1634
+ ],
1635
+ "angle": 0,
1636
+ "content": "David Rau, Shuai Wang, Hervé Déjean, and Stephane Clinchant. 2024. Context embeddings for efficient answer generation in rag. arXiv preprint arXiv:2407.09252."
1637
+ },
1638
+ {
1639
+ "type": "text",
1640
+ "bbox": [
1641
+ 0.115,
1642
+ 0.441,
1643
+ 0.49,
1644
+ 0.495
1645
+ ],
1646
+ "angle": 0,
1647
+ "content": "Siyu Ren, Qi Jia, and Kenny Q. Zhu. 2023. Context compression for auto-regressive transformers with sentinel tokens. In The 2023 Conference on Empirical Methods in Natural Language Processing."
1648
+ },
1649
+ {
1650
+ "type": "text",
1651
+ "bbox": [
1652
+ 0.115,
1653
+ 0.503,
1654
+ 0.49,
1655
+ 0.596
1656
+ ],
1657
+ "angle": 0,
1658
+ "content": "David Wingate, Mohammad Shoeybi, and Taylor Sorensen. 2022. Prompt compression and contrastive conditioning for controllability and toxicity reduction in language models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5621-5634, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
1659
+ },
1660
+ {
1661
+ "type": "title",
1662
+ "bbox": [
1663
+ 0.51,
1664
+ 0.085,
1665
+ 0.651,
1666
+ 0.101
1667
+ ],
1668
+ "angle": 0,
1669
+ "content": "A Appendices"
1670
+ },
1671
+ {
1672
+ "type": "title",
1673
+ "bbox": [
1674
+ 0.51,
1675
+ 0.11,
1676
+ 0.685,
1677
+ 0.126
1678
+ ],
1679
+ "angle": 0,
1680
+ "content": "A.1 Model Training"
1681
+ },
1682
+ {
1683
+ "type": "text",
1684
+ "bbox": [
1685
+ 0.508,
1686
+ 0.131,
1687
+ 0.885,
1688
+ 0.274
1689
+ ],
1690
+ "angle": 0,
1691
+ "content": "The training parameters for \\(500\\mathrm{x}\\) Compressor and ICAE are detailed in Table 7. Main packages are torch 2.3.1 and transformers 4.42.3, and the full environment can be got from Section 7. The evaluation losses for both models are illustrated in Figure 4. All models have successfully converged, with \\(500\\mathrm{x}\\) Compressor demonstrating better performance compared to ICAE, as indicated by the evaluation loss."
1692
+ },
1693
+ {
1694
+ "type": "table",
1695
+ "bbox": [
1696
+ 0.527,
1697
+ 0.283,
1698
+ 0.868,
1699
+ 0.355
1700
+ ],
1701
+ "angle": 0,
1702
+ "content": "<table><tr><td></td><td colspan=\"2\">Pretraining</td><td colspan=\"2\">Finetuning</td></tr><tr><td></td><td>500→16</td><td>500→1</td><td>500→16</td><td>500→1</td></tr><tr><td>Total steps</td><td>42000</td><td>103800</td><td>20000</td><td>10000</td></tr><tr><td>Warm-up steps</td><td>300</td><td>300</td><td>300</td><td>300</td></tr><tr><td>Learning rate</td><td>1e-4</td><td>1e-4</td><td>5e-5</td><td>5e-5</td></tr><tr><td>Batch size</td><td>4</td><td>4</td><td>4</td><td>4</td></tr><tr><td>Optimizer</td><td>AdamW</td><td>AdamW</td><td>AdamW</td><td>AdamW</td></tr></table>"
1703
+ },
1704
+ {
1705
+ "type": "table_caption",
1706
+ "bbox": [
1707
+ 0.509,
1708
+ 0.364,
1709
+ 0.883,
1710
+ 0.391
1711
+ ],
1712
+ "angle": 0,
1713
+ "content": "Table 7: Training parameters for \\( {500}\\mathrm{x} \\) Compressor and ICAE."
1714
+ },
1715
+ {
1716
+ "type": "title",
1717
+ "bbox": [
1718
+ 0.51,
1719
+ 0.423,
1720
+ 0.838,
1721
+ 0.438
1722
+ ],
1723
+ "angle": 0,
1724
+ "content": "A.2 ArxivCorpus and ArxivQA Dataset"
1725
+ },
1726
+ {
1727
+ "type": "text",
1728
+ "bbox": [
1729
+ 0.51,
1730
+ 0.444,
1731
+ 0.852,
1732
+ 0.49
1733
+ ],
1734
+ "angle": 0,
1735
+ "content": "Source of Arxiv abstracts in the ArxivCorpus: https://www.kaggle.com/datasets/ Cornell-University/arxiv"
1736
+ },
1737
+ {
1738
+ "type": "text",
1739
+ "bbox": [
1740
+ 0.51,
1741
+ 0.492,
1742
+ 0.882,
1743
+ 0.522
1744
+ ],
1745
+ "angle": 0,
1746
+ "content": "The detailed information for ArxivCorpus and the ArxivQA dataset is shown in Table 8."
1747
+ },
1748
+ {
1749
+ "type": "text",
1750
+ "bbox": [
1751
+ 0.528,
1752
+ 0.524,
1753
+ 0.808,
1754
+ 0.54
1755
+ ],
1756
+ "angle": 0,
1757
+ "content": "The prompt to generate the QA pairs:"
1758
+ },
1759
+ {
1760
+ "type": "code",
1761
+ "bbox": [
1762
+ 0.548,
1763
+ 0.551,
1764
+ 0.842,
1765
+ 0.694
1766
+ ],
1767
+ "angle": 0,
1768
+ "content": "context: {truncated_context} \ntask: design the {number} best extractive question answering pairs for the context to test information loss \nrequirement: the question should be direct; the question should try to use the same words in the context; the answer should directly appear in the context (a span of the context); the answer should not be in the question; just output the results in format and do not output other words \noutput json format: {{\"id\":1, \"question\": \"\", \"answer\": \"\", {\"id\":2, \"question\": \"\", \"answer\": \"\", ...]}"
1769
+ },
1770
+ {
1771
+ "type": "title",
1772
+ "bbox": [
1773
+ 0.51,
1774
+ 0.715,
1775
+ 0.722,
1776
+ 0.73
1777
+ ],
1778
+ "angle": 0,
1779
+ "content": "A.3 Question Answering"
1780
+ },
1781
+ {
1782
+ "type": "text",
1783
+ "bbox": [
1784
+ 0.51,
1785
+ 0.735,
1786
+ 0.704,
1787
+ 0.75
1788
+ ],
1789
+ "angle": 0,
1790
+ "content": "Prompt for QA (Instruct):"
1791
+ },
1792
+ {
1793
+ "type": "text",
1794
+ "bbox": [
1795
+ 0.547,
1796
+ 0.762,
1797
+ 0.825,
1798
+ 0.8
1799
+ ],
1800
+ "angle": 0,
1801
+ "content": "Please finish the extractive question answering task. Just output the answer. Context: {context} Question: {question} Answer:"
1802
+ },
1803
+ {
1804
+ "type": "text",
1805
+ "bbox": [
1806
+ 0.509,
1807
+ 0.822,
1808
+ 0.883,
1809
+ 0.852
1810
+ ],
1811
+ "angle": 0,
1812
+ "content": "This paper and codes are helped with ChatGPT and Claude."
1813
+ },
1814
+ {
1815
+ "type": "page_number",
1816
+ "bbox": [
1817
+ 0.477,
1818
+ 0.928,
1819
+ 0.526,
1820
+ 0.941
1821
+ ],
1822
+ "angle": 0,
1823
+ "content": "25090"
1824
+ }
1825
+ ],
1826
+ [
1827
+ {
1828
+ "type": "table",
1829
+ "bbox": [
1830
+ 0.169,
1831
+ 0.182,
1832
+ 0.83,
1833
+ 0.234
1834
+ ],
1835
+ "angle": 0,
1836
+ "content": "<table><tr><td rowspan=\"2\"></td><td rowspan=\"2\">Train</td><td colspan=\"2\">ArxivCorpus</td><td rowspan=\"2\">Test</td><td rowspan=\"2\">Train</td><td colspan=\"2\">ArxivQA Dataset</td></tr><tr><td>Development</td><td>Test</td><td>Development</td><td>Test</td></tr><tr><td>Number of data records</td><td>2353924</td><td>3000</td><td>2500</td><td>250000</td><td>2500</td><td>5000</td><td></td></tr><tr><td>Knowledge cutoff</td><td>Pre 07/2023</td><td>01-04/2024</td><td>01-04/2024</td><td>Pre 07/2023</td><td>Pre 07/2023</td><td>01-04/2024</td><td></td></tr><tr><td>Source</td><td colspan=\"3\">Abstracts from Arxiv</td><td colspan=\"2\">Train set of ArxivCorpus</td><td colspan=\"2\">Test set of ArxivCorpus</td></tr></table>"
1837
+ },
1838
+ {
1839
+ "type": "table_caption",
1840
+ "bbox": [
1841
+ 0.23,
1842
+ 0.243,
1843
+ 0.768,
1844
+ 0.258
1845
+ ],
1846
+ "angle": 0,
1847
+ "content": "Table 8: Detailed information about the ArxivCorpus and the ArxivQA dataset."
1848
+ },
1849
+ {
1850
+ "type": "image",
1851
+ "bbox": [
1852
+ 0.131,
1853
+ 0.467,
1854
+ 0.499,
1855
+ 0.617
1856
+ ],
1857
+ "angle": 0,
1858
+ "content": null
1859
+ },
1860
+ {
1861
+ "type": "image",
1862
+ "bbox": [
1863
+ 0.502,
1864
+ 0.467,
1865
+ 0.868,
1866
+ 0.615
1867
+ ],
1868
+ "angle": 0,
1869
+ "content": null
1870
+ },
1871
+ {
1872
+ "type": "image",
1873
+ "bbox": [
1874
+ 0.131,
1875
+ 0.633,
1876
+ 0.495,
1877
+ 0.793
1878
+ ],
1879
+ "angle": 0,
1880
+ "content": null
1881
+ },
1882
+ {
1883
+ "type": "image",
1884
+ "bbox": [
1885
+ 0.501,
1886
+ 0.633,
1887
+ 0.867,
1888
+ 0.793
1889
+ ],
1890
+ "angle": 0,
1891
+ "content": null
1892
+ },
1893
+ {
1894
+ "type": "image_caption",
1895
+ "bbox": [
1896
+ 0.185,
1897
+ 0.803,
1898
+ 0.812,
1899
+ 0.818
1900
+ ],
1901
+ "angle": 0,
1902
+ "content": "Figure 4: Evaluation loss for \\(500\\mathrm{x}\\) Compressor and ICAE during pretraining and fine-tuning."
1903
+ },
1904
+ {
1905
+ "type": "page_number",
1906
+ "bbox": [
1907
+ 0.477,
1908
+ 0.928,
1909
+ 0.524,
1910
+ 0.941
1911
+ ],
1912
+ "angle": 0,
1913
+ "content": "25091"
1914
+ }
1915
+ ]
1916
+ ]
2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/03c637d4-7f7c-452b-825b-cd37087e895b_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b64927118a51d1a1f7c2049804f6cc0d9ef7f2dd9eda8469aea28f82be35314a
3
+ size 1657054
2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/full.md ADDED
@@ -0,0 +1,312 @@
1
+ # 500xCompressor: Generalized Prompt Compression for Large Language Models
2
+
3
+ Zongqian Li
4
+ University of Cambridge
5
+ zl510@cam.ac.uk
6
+
7
+ Yixuan Su
8
+ University of Cambridge
9
+ ys484@cam.ac.uk
10
+
11
+ Nigel Collier
12
+ University of Cambridge
13
+ nhc30@cam.ac.uk
14
+
15
+ # Abstract
16
+
17
+ Prompt compression is important for large language models (LLMs) to increase inference speed, reduce costs, and improve user experience. However, current methods face challenges such as low compression ratios and potential training-test overlap during evaluation. To address these issues, we propose 500xCompressor, a method that compresses natural language contexts into a minimum of one special token and demonstrates strong generalization ability. The 500xCompressor introduces approximately $0.3\%$ additional parameters and achieves compression ratios ranging from 6x to 500x, achieving $27 - 90\%$ reduction in calculations and $55 - 83\%$ memory savings when generating 100-400 tokens for new and reused prompts at 500x compression, while retaining $70 - 74\%$ (F1) and $77 - 84\%$ (Exact Match) of the LLM capabilities compared to using non-compressed prompts. It is designed to compress any text, answer various types of questions, and can be utilized by the original LLM without requiring fine-tuning. Initially, 500xCompressor was pretrained on the ArxivCorpus, followed by fine-tuning on the ArxivQA dataset, and subsequently evaluated on strictly unseen and cross-domain question answering (QA) datasets. This study shows that KV values outperform embeddings in preserving information at high compression ratios. The highly compressive nature of natural language prompts, even for detailed information, suggests potential for future applications and the development of a new LLM language.
18
+
19
+ # 1 Introduction
20
+
21
+ Long prompts present several challenges in natural language processing applications, such as decreased inference speed, higher computation cost, and a negative influence on user experience. Additionally, the context length limit restricts model
22
+
23
+ <https://github.com/ZongqianLi/500xCompressor>
24
+
25
+ ![](images/c88f2f2ba27aaf18daa21dc5d042bfd8cf899928b2fb6ae02a70f13753832852.jpg)
26
+ Figure 1: The original text is compressed by $500\mathrm{x}$ Compressor and utilized for downstream tasks.
27
+
28
+ performance and application scenarios, creating a strong demand for prompt length reduction.
29
+
30
+ Two primary methods for prompt compression have been proposed: hard prompt and soft prompt. Hard prompt methods, such as SelectiveSentence (Li et al., 2023) and LLMLingua (Jiang et al., 2023a), eliminate low-information sentences, words, or even tokens. On the other hand, soft prompt methods, including GIST (Mu et al., 2024), AutoCompressor (Chevalier et al., 2023), and ICAE (Ge et al., 2024), compress natural language tokens into a small number of special tokens. However, these methods have problems such as low compression ratios (low efficiency improvement), unclear information loss, and potential
31
+
32
+ ![](images/9843984ec7c636a0bb3ae0a5acda0d51c44f6d6be5b60801de05bb8210dc47ff.jpg)
33
+ Figure 2: Process of pretraining (left), fine-tuning (middle), and prediction (right) with $500\mathrm{x}$ Compressor.
34
+
35
+ training-test overlap during evaluation, as discussed in detail in Section 6. For instance, ICAE achieves compression ratios no higher than $15\mathrm{x}$, and the win rate evaluation metric cannot quantitatively measure the extent of information loss during compression. Additionally, evaluation texts sourced from the Wikipedia dataset might overlap with the training data for LLaMa series models (Grattafori et al., 2024), raising concerns that the generated content could be retrieved from the memory of the LLM rather than the compressed prompts.
36
+
37
+ To solve these problems, we propose 500xCompressor, illustrated in Figure 1. This method compresses prompts of approximately 500 tokens into a minimum of one token, allowing the compressed tokens to regenerate the original texts or be used for QA. Although trained on the ArxivCorpus and ArxivQA dataset, 500xCompressor could generalize to answer other types of questions. Analysis demonstrates that detailed information, such as proper nouns, special names, and numbers, could be accurately compressed and retrieved.
38
+
39
+ 500xCompressor retains the advantages of previous methods and introduces several additional characteristics. Similar to previous soft prompt methods, 500xCompressor is generalized and nonselective, capable of compressing unseen texts across various topics for QA, demonstrating its generalization ability. Unlike selective compression methods, 500xCompressor aims to regenerate the entire original text, ensuring that all tokens in the original text contribute to the compression tokens. Moreover, the compressed prompts could be used to regenerate original texts or for QA without requiring fine-tuning of the LLM, preserving the LLM's original capabilities and improving the convenience of using compressed tokens.
40
+
41
+ In addition to these existing advantages, we provide contributions in three main areas:
42
+
43
+ - High Compression Ratio: This study evaluates the compression model with one and sixteen tokens to compress up to 500 tokens, achieving compression ratios up to $500\mathrm{x}$ . These ratios significantly outperform previous studies, which reported ratios of less than $50\mathrm{x}$ , fully testing the upper limit of prompt compression.
44
+ - Strict Unseen Evaluation: Using Arxiv abstracts published post-January 2024 ensures evaluation on content unseen by both the LLM and compression model, verifying that outputs are from compressed prompts rather than pre-existing model knowledge.
45
+ - Quantitative Analysis of Information Loss: Through extractive QA with context-span answers, we enable direct quantitative comparison between compressed and uncompressed performance, providing precise measurements of the information loss caused by compression.
46
+
47
+ In this paper, the design of $500\mathrm{x}$ Compressor is first introduced in Section 2, including how to train and use the compression model. After that, Section 3 explains the training and evaluation datasets, the baselines, and the evaluation metrics. The evaluation results for regeneration and QA are presented in Section 4, with ablation studies analyzing the variables influencing the compression models. This is followed by discussions in Section 5, and finally, the sections on related work and conclusions.
48
+
49
+ # 2 Methods
50
+
51
+ # 2.1 Training
52
+
53
+ The training process for the compression model is illustrated in Figure 2, including both pretraining and fine-tuning stages. The compression model
54
+
55
+ comprises two components: an encoder and a decoder, which is similar to an autoencoder and comparable to ICAE. The encoder is the frozen LLM $\Theta_{\mathrm{LLM}}$ with trainable LoRA parameters $\Theta_{\mathrm{Lora}}$ (Hu et al., 2022), while the decoder is the original frozen LLM $\Theta_{\mathrm{LLM}}$ . The encoder receives the original text tokens $\mathbf{T} = (t_1, t_2, \dots, t_l)$ and the compression tokens $\mathbf{C} = (c_1, c_2, \dots, c_k)$ . Through layers, the information in the text is saved into the compression tokens, whose KV values in each layer of the LLM $\mathbf{H}_{\mathbf{C}}$ are output and passed to the decoder.
56
+
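+ For concreteness, the sketch below illustrates one way the encoder step just described can be implemented. It is only an illustrative assumption based on this description: the helper name `encode_to_kv` and the HuggingFace-style `use_cache`/`past_key_values` handling are not taken from the released code.
+
+ ```python
+ # Sketch: append k learnable compression-token embeddings (C) after the text
+ # embeddings (T), run the LoRA-augmented encoder, and keep the per-layer KV
+ # entries of the last k positions; this tuple plays the role of H_C above.
+ import torch
+
+ def encode_to_kv(encoder, text_embeds, comp_embeds):
+     """text_embeds: (1, l, d) frozen inputs; comp_embeds: (1, k, d) learnable."""
+     inputs = torch.cat([text_embeds, comp_embeds], dim=1)   # (1, l + k, d)
+     out = encoder(inputs_embeds=inputs, use_cache=True)     # assumed HF-style call
+     k = comp_embeds.size(1)
+     return tuple(
+         (key[:, :, -k:, :], value[:, :, -k:, :])            # (1, heads, k, head_dim)
+         for key, value in out.past_key_values
+     )
+ ```
+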
57
+ During pretraining, the inputs of the decoder are the KV values of the compression tokens from the encoder, the beginning of sequence token, and the original text tokens $(\mathbf{H}_{\mathbf{C}},[\mathbf{BOS}],\mathbf{T})$ . The LLM is trained to regenerate the original text based on the KV values, using the end of sequence token [EOS] to denote when to stop. The cross-entropy loss between the output of the decoder and the original text is calculated and used to train the LoRA parameters in the encoder:
58
+
59
+ $$
60
+ \mathcal{L}_{\mathrm{P}} = - \sum_{i = 1}^{l} \log P \left( t_{i} \mid \mathbf{H}_{\mathbf{C}}, [\mathbf{BOS}], t_{1:i-1}; \Theta_{\mathrm{LLM}}, \Theta_{\mathrm{Lora}} \right) \tag{1}
61
+ $$
62
+
63
+ For instruction fine-tuning, the process is similar to pretraining. However, instead of the original texts, the decoder is provided with questions $\mathbf{Q} = (q_{1}, q_{2}, \ldots, q_{m})$ and answers $\mathbf{A} = (a_{1}, a_{2}, \ldots, a_{n})$ , which are used to train the LLM to retrieve information from the KV values of the compression tokens and generate answers:
64
+
65
+ $$
66
+ \mathcal{L}_{\mathrm{F}} = - \sum_{j = 1}^{n} \log P \left( a_{j} \mid \mathbf{H}_{\mathbf{C}}, q_{1:m}, a_{1:j-1}; \Theta_{\mathrm{LLM}}, \Theta_{\mathrm{Lora}} \right) \tag{2}
67
+ $$
68
+
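+ To make Equations (1) and (2) concrete, the following is a minimal sketch of both losses, assuming a HuggingFace-style decoder that accepts the compression-token KV cache $\mathbf{H}_{\mathbf{C}}$ as `past_key_values`; the function names and masking details are illustrative rather than the authors' implementation.
+
+ ```python
+ # Sketch of the pretraining loss (Eq. 1) and the fine-tuning loss (Eq. 2).
+ # h_c is the compression-token KV cache from the LoRA encoder; the decoder
+ # is the frozen original LLM.
+ import torch
+ import torch.nn.functional as F
+
+ def regeneration_loss(decoder, h_c, bos_id, text_ids):
+     # Teacher forcing: position [BOS] predicts t_1, position t_{i-1} predicts t_i.
+     input_ids = torch.cat([torch.tensor([[bos_id]]), text_ids], dim=1)
+     logits = decoder(input_ids=input_ids, past_key_values=h_c).logits
+     return F.cross_entropy(logits[:, :-1].flatten(0, 1), text_ids.flatten())
+
+ def qa_loss(decoder, h_c, question_ids, answer_ids):
+     # Condition on H_C and the question; supervise only the answer tokens.
+     input_ids = torch.cat([question_ids, answer_ids], dim=1)
+     logits = decoder(input_ids=input_ids, past_key_values=h_c).logits
+     m = question_ids.size(1)
+     return F.cross_entropy(logits[:, m - 1:-1].flatten(0, 1), answer_ids.flatten())
+ ```
+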
69
+ The training process ensures no training-test overlap, as the original LLM parameters in both the encoder and decoder remain unchanged, and no additional parameters are introduced in the decoder. Thus, no information is saved in the decoder.
70
+
71
+ Main differences between 500xCompressor and ICAE: (1) The input of ICAE decoder is the output embeddings for the compression tokens, whereas 500xCompressor uses the KV values for the compression tokens. KV values could save more information and do not increase inference time. (2) In addition, this paper uses the [BOS] token to guide the LLM to regenerate the compressed texts, while ICAE creates a trainable new token.
72
+
73
+ # 2.2 Prediction
74
+
75
+ During prediction, all encoder and decoder parameters are frozen. The original text is fed into the encoder, which saves the information into compression tokens. These compression tokens' KV values are then input into the decoder, which regenerates the compressed text when guided by the [BOS] token or generates an answer based on a given question:
76
+
77
+ $$
78
+ \hat{t}_{i} = \underset{\hat{t}_{i}}{\arg \max} \, P(\hat{t}_{i} \mid \mathbf{H}_{\mathbf{C}}, [\mathbf{BOS}], \hat{t}_{1:i-1}; \boldsymbol{\Theta}_{\mathrm{LLM}}) \tag{3}
79
+ $$
80
+
81
+ $$
82
+ \hat{a}_{j} = \underset{\hat{a}_{j}}{\arg \max} \, P(\hat{a}_{j} \mid \mathbf{H}_{\mathbf{C}}, q_{1:m}, \hat{a}_{1:j-1}; \boldsymbol{\Theta}_{\mathrm{LLM}}) \tag{4}
83
+ $$
84
+
85
+ where $\hat{t}_i$ denotes the $i$ -th token in the regenerated text, and $\hat{a}_j$ indicates the $j$ -th token in the generated answer.
86
+
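+ A simplified greedy loop corresponding to Equation (3) might look as follows; Equation (4) is the same loop started from the question tokens instead of [BOS]. The explicit loop (rather than `model.generate`) and the call pattern are only for clarity and are assumptions, not the released implementation.
+
+ ```python
+ # Greedy regeneration (Eq. 3): start from [BOS] conditioned on H_C and keep
+ # appending the arg-max token until [EOS] is produced.
+ import torch
+
+ @torch.no_grad()
+ def greedy_decode(decoder, h_c, bos_id, eos_id, max_new_tokens=512):
+     generated = torch.tensor([[bos_id]])
+     past, next_input = h_c, generated            # cache already holds the compressed prompt
+     for _ in range(max_new_tokens):
+         out = decoder(input_ids=next_input, past_key_values=past, use_cache=True)
+         past = out.past_key_values               # grow the cache instead of re-encoding
+         next_token = out.logits[:, -1].argmax(dim=-1, keepdim=True)
+         generated = torch.cat([generated, next_token], dim=1)
+         if next_token.item() == eos_id:
+             break
+         next_input = next_token
+     return generated
+ ```
+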
87
+ By replacing the original text tokens with compressed tokens, the speed of answering questions is increased. This is because, in inference, each token in the question or generated answer must attend to the previous tokens. Replacing a large number of original text tokens with a small number of compressed tokens reduces computational needs.
88
+
89
+ # 3 Experiments
90
+
91
+ # 3.1 Datasets
92
+
93
+ The ArxivCorpus was used to pretrain 500xCompressor, and the compression model was then finetuned on the ArxivQA dataset. After that, six benchmarks with different context lengths were used to evaluate the compression models for various abilities: ArxivQA and TriviaQA (Joshi et al., 2017) for information extraction, RelationExtraction (Levy et al., 2017) for relation extraction, NaturalQuestions (Kwiatkowski et al., 2019) and TextbookQA (Kembhavi et al., 2017) for reading comprehension, and RACE (Lai et al., 2017) for reasoning. Among these datasets, ArxivQA is introduced in this paper, while the others are classical QA datasets from MRQA (Fisch et al., 2019).
94
+
95
+ The ArxivCorpus comprises Arxiv paper abstracts published before April 2024, with pre-July 2023 papers forming the training set and post-January 2024 papers forming the development and test sets. Test set abstracts are selected by token lengths (96, 192, 288, 384, and 480) to evaluate the regeneration performance of prompt compression methods.
96
+
97
+ The ArxivCorpus was chosen for several reasons: (1) High-quality academic content with clear publication timestamps, (2) Verified temporal separation from LLaMa-3's March 2023 knowledge cutoff, ensuring the regenerated texts are based on the compressed prompts rather than the memory of the LLM, and (3) Official distribution through Cornell University, addressing the copyright problems that affect datasets like Pile.
98
+
99
+ The ArxivQA dataset (more than 250k QA pairs), derived from ArxivCorpus using LLaMa-3-70b-Instruct, contains extractive QA pairs with the number of QA pairs increasing proportionally with abstract length (starting with 5 pairs per 96-token abstract). Training and development QA pairs are generated from the training set abstracts, while test set pairs come from test set abstracts.
100
+
101
+ ArxivQA offers three main advantages: (1) Guaranteed LLM-unseen test contexts avoiding training-test overlap, (2) Extractive QA format allowing quantitative evaluation of information loss, and (3) Domain-specific questions generated by LLaMa-3-70b-Instruct based on ArxivCorpus ensuring both difficulty and quality.
102
+
103
+ # 3.2 Baselines and Gold Standard
104
+
105
+ Two baseline approaches are chosen: LLMLingua-2 (Pan et al., 2024), exemplifying hard prompt compression through selective token elimination, and ICAE, utilizing soft prompt compression via continuous vectors. Both methods process the compressed context alongside questions for LLM inference. The gold standard provides the LLM with the complete combination of instruction, uncompressed context, and question.
106
+
107
+ # 3.3 Evaluation Metrics
108
+
109
+ For evaluating text regeneration, Rouge-2-F (based on bigram overlap) and BLEU (measuring n-gram precision) scores are used to assess the similarity between regenerated and original texts. For extractive QA tasks, F1 score (the harmonic mean of precision and recall) and Exact Match (EM) are used to measure answer accuracy. Higher scores in all these metrics indicate better performance, with a maximum value of $100\%$ .
110
+
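+ For reference, EM and the token-level F1 used for extractive QA are typically computed as in the standard SQuAD-style sketch below; the exact answer normalization used in this paper may differ.
+
+ ```python
+ # Standard SQuAD-style Exact Match and token-level F1 for extractive QA.
+ from collections import Counter
+
+ def exact_match(prediction: str, gold: str) -> float:
+     return float(prediction.strip().lower() == gold.strip().lower())
+
+ def f1_score(prediction: str, gold: str) -> float:
+     pred_tokens = prediction.lower().split()
+     gold_tokens = gold.lower().split()
+     common = Counter(pred_tokens) & Counter(gold_tokens)
+     num_same = sum(common.values())
+     if num_same == 0:
+         return 0.0
+     precision = num_same / len(pred_tokens)
+     recall = num_same / len(gold_tokens)
+     return 2 * precision * recall / (precision + recall)
+ ```
+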
111
+ # 3.4 Models
112
+
113
+ The encoder of 500xCompressor is frozen LLaMA-3-8B-Instruct with trainable LoRA parameters (rank=64) and the decoder is frozen LLaMA-3-8B-Instruct (Grattafori et al., 2024).
114
+
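+ A plausible way to set up such an encoder with the `peft` library is sketched below. Only the base model and the LoRA rank (64) are stated in the paper; the target modules, alpha, and dropout are assumptions.
+
+ ```python
+ # Sketch: frozen LLaMA-3-8B-Instruct with rank-64 LoRA adapters as the encoder.
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+
+ base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
+ lora_config = LoraConfig(
+     r=64,                                # rank stated in the paper
+     lora_alpha=128,                      # assumption: 2 * r
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
+     lora_dropout=0.05,                   # assumption
+     task_type="CAUSAL_LM",
+ )
+ encoder = get_peft_model(base, lora_config)  # only the LoRA weights are trainable
+ ```
+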
115
+ # 4 Results
116
+
117
+ # 4.1 Efficiency Improvement
118
+
119
+ Table 1 demonstrates the importance of prompt compression for efficiency gains, showing improvements in both first-time processing (new prompt) and cached processing scenarios (reused prompt) at $500\mathrm{x}$ compression. For new prompts, while compression introduces a minimal computational cost $(+0.4\%)$ , the savings increase substantially with output length, reaching $49.10\%$ reduction in computation at 400 tokens. Reused prompts demonstrate immediate computational benefits, achieving $90.64\%$ reduction at 100 tokens output length. For memory usage of KV cache, reused prompts achieve $99.80\%$ initial memory reduction, and both prompts show consistent memory savings from $83.16\%$ to $55.33\%$ as output length increases to 400 tokens. Given that real-world applications often involve batch processing and repeated access to the same content, these efficiency gains make $500\mathrm{x}$ Compressor valuable in real-world scenarios.
120
+
121
+ <table><tr><td rowspan="2">Output Length</td><td colspan="2">Calculations</td><td colspan="2">Memory</td></tr><tr><td>New prompt</td><td>Reused prompt</td><td>New prompt</td><td>Reused prompt</td></tr><tr><td>0</td><td>+0.4</td><td>0</td><td>+0.2</td><td>-99.80</td></tr><tr><td>100</td><td>-27.39</td><td>-90.64</td><td>-83.16</td><td>-83.16</td></tr><tr><td>200</td><td>-40.47</td><td>-83.09</td><td>-71.28</td><td>-71.28</td></tr><tr><td>300</td><td>-46.56</td><td>-76.71</td><td>-62.37</td><td>-62.37</td></tr><tr><td>400</td><td>-49.10</td><td>-71.23</td><td>-55.33</td><td>-55.33</td></tr></table>
122
+
123
+ Table 1: Computation and memory savings (in percentage) achieved by $500\mathrm{x}$ Compressor for different output lengths (token) at $500\rightarrow 1$ compression. A new prompt refers to first-time processing of input, while a reused prompt denotes repeated processing that can utilize cached intermediate results.
124
+
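+ The memory column can be sanity-checked with a back-of-the-envelope calculation, assuming KV-cache memory grows linearly with the number of cached tokens; the small residual mismatches with Table 1 presumably come from bookkeeping details (e.g., special tokens) in the actual measurement.
+
+ ```python
+ # Rough check of the memory column in Table 1 at 500 -> 1 compression,
+ # assuming KV-cache size is proportional to the number of cached tokens.
+ def memory_saving(output_len, prompt_tokens=500, compressed_tokens=1):
+     full = prompt_tokens + output_len            # tokens cached without compression
+     compressed = compressed_tokens + output_len  # tokens cached with compression
+     return 100 * (1 - compressed / full)         # saving in percent
+
+ for n in (100, 200, 300, 400):
+     print(n, memory_saving(n))
+ # roughly 83.2, 71.3, 62.4, 55.4 percent; Table 1 reports 83.16, 71.28, 62.37, 55.33
+ # (a reused prompt with no output gives 1 - 1/500 = 99.8 percent, matching the table)
+ ```
+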
125
+ # 4.2 Text Regeneration
126
+
127
+ The text regeneration capabilities of different prompt compression methods are evaluated on the strictly unseen test set of ArxivCorpus. Table 2 shows the performance across varying context lengths and compression ratios, measured by Rouge-2-F and BLEU scores. Our analysis examines the overall advantages, influencing variables, and stability of the improvements.
128
+
129
+ 500xCompressor demonstrates consistently better performance than ICAE across all test scenarios. Our method outperforms ICAE on both Rouge-2-F and BLEU metrics for all context lengths and compression ratios tested. Quantitatively, the average Rouge-2-F score improves by
130
+
131
+ <table><tr><td rowspan="2">Dataset Length Eval. Metrics</td><td colspan="2">96</td><td colspan="2">192</td><td colspan="2">ArxivCorpus</td><td colspan="2">384</td><td colspan="2">480</td><td colspan="2">Average</td></tr><tr><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td><td>RG</td><td>BL</td></tr><tr><td>Ours500→16</td><td>99.53</td><td>99.48</td><td>96.49</td><td>96.21</td><td>82.31</td><td>80.93</td><td>55.36</td><td>53.50</td><td>31.55</td><td>32.19</td><td>73.05</td><td>72.46</td></tr><tr><td>ICAE500→16</td><td>83.52</td><td>81.85</td><td>58.21</td><td>55.90</td><td>40.96</td><td>38.37</td><td>34.28</td><td>32.03</td><td>29.71</td><td>29.61</td><td>49.33</td><td>47.55</td></tr><tr><td>Absolute Δ</td><td>16.02</td><td>17.62</td><td>38.28</td><td>40.31</td><td>41.35</td><td>42.56</td><td>21.07</td><td>21.46</td><td>1.84</td><td>2.58</td><td>23.71</td><td>24.91</td></tr><tr><td>Relative Δ</td><td>19.19%</td><td>21.53%</td><td>65.76%</td><td>72.12%</td><td>100.96%</td><td>110.92%</td><td>61.47%</td><td>66.98%</td><td>6.20%</td><td>8.71%</td><td>48.07%</td><td>52.38%</td></tr><tr><td>Ours500→1</td><td>53.49</td><td>49.77</td><td>29.73</td><td>26.53</td><td>22.15</td><td>19.15</td><td>20.61</td><td>17.91</td><td>18.85</td><td>18.80</td><td>28.97</td><td>26.43</td></tr><tr><td>ICAE500→1</td><td>30.29</td><td>24.18</td><td>18.21</td><td>13.94</td><td>13.89</td><td>10.36</td><td>13.36</td><td>9.92</td><td>12.28</td><td>11.68</td><td>17.61</td><td>14.02</td></tr><tr><td>Absolute Δ</td><td>23.19</td><td>25.59</td><td>11.51</td><td>12.59</td><td>8.25</td><td>8.79</td><td>7.25</td><td>7.99</td><td>6.56</td><td>7.11</td><td>11.36</td><td>12.41</td></tr><tr><td>Relative Δ</td><td>76.58%</td><td>105.84%</td><td>63.22%</td><td>90.33%</td><td>59.45%</td><td>84.81%</td><td>54.30%</td><td>80.48%</td><td>53.44%</td><td>60.86%</td><td>64.50%</td><td>88.56%</td></tr></table>
132
+
133
+ Table 2: Quantitative evaluation of text regeneration performance on the ArxivCorpus dataset with strictly unseen texts. RG and BL denote Rouge-2-F and BLEU scores respectively. The notation $\mathrm{X}\rightarrow \mathrm{Y}$ indicates compression from a maximum of X input tokens to Y compression tokens. Higher values between $500\mathrm{xCompressor}$ (Ours) and ICAE baseline are shown in bold and their performance differences are shown by absolute and relative $\Delta$ . All improvements (shown in green) demonstrate the consistent better performance of $500\mathrm{xCompressor}$ across varying context lengths and compression ratios.
134
+
135
+ 23.71 points $(48.07\%)$ and 11.36 points $(64.50\%)$ at $31.25\mathrm{x}$ and $500\mathrm{x}$ .
136
+
137
+ The regeneration performance exhibits clear dependencies on both compression ratio and context length. Lower compression ratios and shorter contexts yield higher text precision, with Rouge-2-F and BLEU scores consistently exceeding $95\%$ in optimal conditions. As compression ratios increase, the relative improvements over ICAE become more obvious, showing relative gains of $64.50\%$ in Rouge-2-F and $88.56\%$ in BLEU scores. While performance naturally decreases with longer contexts, the decrease rate shows a stable trend across different compression scenarios.
138
+
139
+ The method exhibits consistent stability in performance gains. Both absolute and relative improvements remain uniform across Rouge-2-F and BLEU metrics, indicating robust improvement in regeneration quality regardless of the evaluation criteria used.
140
+
141
+ While the results above demonstrate good text regeneration ability, the true performance of compression is better shown in downstream QA tasks.
142
+
143
+ # 4.3 Question Answering
144
+
145
+ Figure 3 shows the performance of different prompt compression methods across varying compression ratios on QA datasets. $500\mathrm{x}$ Compressor consistently outperforms baseline methods at all compression ratios tested, confirming that KV values have advantages over embeddings (used in ICAE) in preserving information. Notably, as the compression ratio increases from $31.25\mathrm{x}$ to $500\mathrm{x}$ , $500\mathrm{x}$ Compressor exhibits better performance retention, dropping from $74.53\%$ to $70.60\%$ (F1 score) and from $84.57\%$ to $77.92\%$ (Exact Match) of its uncompressed performance.
146
+
147
+ Tables 3 and 4 present evaluation results for
148
+
149
+ ![](images/2df329bbb7090c3470d1949f910d488c466523b621f7d0f477c8976882c4c7b0.jpg)
150
+ Figure 3: Performance comparison of prompt compression methods on in-domain and cross-domain QA datasets across varying compression ratios. Y-axis shows F1 scores normalized by uncompressed performance, while X-axis (log scale) denotes compression ratios defined as #maximum_uncompressed_tokens/#compression_tokens. $\uparrow$ indicates higher values are better.
151
+
152
+ 500xCompressor on in-domain and cross-domain QA datasets. These results are analyzed from five aspects: overall performance, influencing variables, generalization capability, scalability, and stability.
153
+
154
+ Overall Performance 500xCompressor demonstrates higher performance across nearly all context lengths, compression ratios, and both in-domain and cross-domain datasets compared to ICAE and LLMLingua-2. In cross-domain evaluation, it achieves average improvements of 7.10 F1 and 7.61 EM points (19.94% and 37.66% relative) at $500\rightarrow 16$ compression, with improvements increasing to 21.93 F1 and 16.64 EM points (107.70% and 161.58% relative) at $500\rightarrow 1$ compression.
155
+
156
+ Performance variables The effectiveness of compression is influenced by both context length
157
+
158
+ <table><tr><td rowspan="2">Dataset Length Eval. Metrics</td><td colspan="2">96</td><td colspan="2">192</td><td colspan="6">ArxivQA</td><td colspan="2">Average</td></tr><tr><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td></tr><tr><td>Instruct</td><td>64.41</td><td>12.40</td><td>61.18</td><td>13.90</td><td>56.08</td><td>9.00</td><td>52.86</td><td>12.40</td><td>44.57</td><td>16.40</td><td>55.82</td><td>12.82</td></tr><tr><td>\( Lingua_{500}\rightarrow 64 \)</td><td>45.88</td><td>7.90</td><td>29.91</td><td>8.20</td><td>21.39</td><td>4.20</td><td>17.68</td><td>3.40</td><td>16.17</td><td>4.20</td><td>26.21</td><td>5.58</td></tr><tr><td>\( Lingua_{500}\rightarrow 32 \)</td><td>26.97</td><td>3.60</td><td>20.45</td><td>4.40</td><td>15.82</td><td>2.40</td><td>13.00</td><td>2.00</td><td>12.28</td><td>2.10</td><td>17.70</td><td>2.90</td></tr><tr><td>\( Ours_{500}\rightarrow 16 \)</td><td>60.49</td><td>25.60</td><td>47.65</td><td>16.50</td><td>35.50</td><td>8.40</td><td>30.00</td><td>7.10</td><td>31.98</td><td>11.70</td><td>41.12</td><td>13.86</td></tr><tr><td>\( ICAE_{500}\rightarrow 16 \)</td><td>57.95</td><td>23.20</td><td>44.41</td><td>15.10</td><td>33.88</td><td>7.70</td><td>28.06</td><td>7.20</td><td>29.72</td><td>10.60</td><td>38.80</td><td>12.76</td></tr><tr><td>Absolute Δ</td><td>2.54</td><td>2.40</td><td>3.23</td><td>1.40</td><td>1.62</td><td>0.70</td><td>1.94</td><td>0.10</td><td>2.25</td><td>1.10</td><td>2.31</td><td>1.10</td></tr><tr><td>Relative Δ</td><td>4.38%</td><td>10.34%</td><td>7.29%</td><td>9.27%</td><td>4.78%</td><td>9.09%</td><td>6.91%</td><td>1.38%</td><td>7.59%</td><td>10.37%</td><td>5.97%</td><td>8.62%</td></tr><tr><td>\( Ours_{500}\rightarrow 1 \)</td><td>42.91</td><td>10.30</td><td>32.88</td><td>6.50</td><td>25.82</td><td>3.80</td><td>23.01</td><td>3.60</td><td>24.29</td><td>6.50</td><td>29.78</td><td>6.14</td></tr><tr><td>\( ICAE_{500}\rightarrow 1 \)</td><td>26.87</td><td>3.50</td><td>21.76</td><td>2.30</td><td>20.34</td><td>2.20</td><td>17.35</td><td>1.70</td><td>17.72</td><td>3.60</td><td>20.81</td><td>2.66</td></tr><tr><td>Absolute Δ</td><td>16.04</td><td>6.80</td><td>11.12</td><td>4.20</td><td>5.47</td><td>1.60</td><td>5.65</td><td>1.90</td><td>6.56</td><td>2.90</td><td>8.97</td><td>3.48</td></tr><tr><td>Relative Δ</td><td>59.71%</td><td>194.28%</td><td>51.12%</td><td>182.60%</td><td>26.89%</td><td>72.72%</td><td>32.58%</td><td>111.76%</td><td>37.03%</td><td>80.55%</td><td>43.11%</td><td>130.82%</td></tr></table>
159
+
160
+ Table 3: In-domain QA evaluation results on the ArxivQA dataset with strictly unseen contexts. Length indicates the length of the context to be compressed. F1 and Exact Match (EM) scores are reported across varying context lengths. "Instruct" means full-context performance with instructions, while Lingua denotes LLMLingua-2 baseline. Performance deltas $(\Delta)$ between $500\mathrm{x}$ Compressor (Ours) and ICAE baseline are shown in green (improvements) and red (decreases).
161
+
162
+ <table><tr><td rowspan="2">Dataset Length Eval. Metrics</td><td colspan="2">RE 39 (553)</td><td colspan="2">NaturalQ 258 (2721)</td><td colspan="2">RACE 369 (824)</td><td colspan="2">TextbookQA 729 (963)</td><td colspan="2">TriviaQA 955 (2124)</td><td colspan="2">Average</td></tr><tr><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td></tr><tr><td>Instruct</td><td>71.16</td><td>52.98</td><td>66.30</td><td>39.92</td><td>39.55</td><td>13.94</td><td>45.15</td><td>19.49</td><td>63.80</td><td>41.65</td><td>57.19</td><td>33.60</td></tr><tr><td>\( Lingua_{500}\rightarrow{16} \)</td><td>57.78</td><td>41.58</td><td>40.46</td><td>23.15</td><td>12.58</td><td>5.93</td><td>29.38</td><td>19.16</td><td>56.06</td><td>46.26</td><td>39.25</td><td>27.22</td></tr><tr><td>\( Lingua_{500}\rightarrow{8} \)</td><td>37.85</td><td>23.98</td><td>32.71</td><td>17.94</td><td>9.11</td><td>3.11</td><td>28.67</td><td>17.29</td><td>56.15</td><td>45.58</td><td>32.90</td><td>21.58</td></tr><tr><td>\( Ours_{500}\rightarrow{16} \)</td><td>68.47</td><td>50.06</td><td>45.53</td><td>26.40</td><td>25.53</td><td>10.97</td><td>30.31</td><td>18.36</td><td>43.76</td><td>33.25</td><td>42.72</td><td>27.81</td></tr><tr><td>\( ICAE_{500}\rightarrow{16} \)</td><td>66.03</td><td>44.60</td><td>46.10</td><td>25.18</td><td>24.32</td><td>9.64</td><td>13.27</td><td>4.79</td><td>28.36</td><td>16.78</td><td>35.62</td><td>20.20</td></tr><tr><td>Absolute \( \Delta \)</td><td>2.43</td><td>5.46</td><td>0.57</td><td>1.21</td><td>1.20</td><td>1.33</td><td>17.03</td><td>13.57</td><td>15.40</td><td>16.46</td><td>7.10</td><td>7.61</td></tr><tr><td>Relative \( \Delta \)</td><td>3.69%</td><td>12.24%</td><td>1.24%</td><td>4.82%</td><td>4.95%</td><td>13.84%</td><td>128.30%</td><td>283.33%</td><td>54.32%</td><td>98.08%</td><td>19.94%</td><td>37.66%</td></tr><tr><td>\( Ours_{500}\rightarrow{1} \)</td><td>63.49</td><td>45.72</td><td>41.36</td><td>22.38</td><td>21.37</td><td>7.41</td><td>30.67</td><td>16.83</td><td>54.61</td><td>42.40</td><td>42.30</td><td>26.95</td></tr><tr><td>\( ICAE_{500}\rightarrow{1} \)</td><td>44.49</td><td>27.27</td><td>26.65</td><td>11.21</td><td>14.24</td><td>4.45</td><td>6.19</td><td>2.19</td><td>10.24</td><td>6.37</td><td>20.36</td><td>10.30</td></tr><tr><td>Absolute \( \Delta \)</td><td>18.99</td><td>18.45</td><td>14.70</td><td>11.16</td><td>7.12</td><td>2.96</td><td>24.48</td><td>14.63</td><td>44.37</td><td>36.03</td><td>21.93</td><td>16.64</td></tr><tr><td>Relative \( \Delta \)</td><td>42.70%</td><td>67.66%</td><td>55.17%</td><td>99.52%</td><td>50.03%</td><td>66.41%</td><td>395.24%</td><td>666.14%</td><td>432.97%</td><td>565.32%</td><td>107.70%</td><td>161.58%</td></tr></table>
163
+
164
+ Table 4: Cross-domain QA evaluation results on diverse QA datasets including RelationExtraction (RE), NaturalQuestions (NaturalQ), RACE, TextbookQA, and TriviaQA. Context lengths are reported as average (maximum) token counts.
165
+
166
+ and compression ratio. While lower compression ratios and shorter contexts generally yield better performance, some cross-domain datasets exhibit interesting results. Notably, TextbookQA and TriviaQA show improved performance at $500\rightarrow 1$ compared to $500\rightarrow 16$ compression.
167
+
168
+ Generalization Capability The model's generalization ability is clearly shown in its cross-domain performance. The performance gap between $500\mathrm{x}$ Compressor and ICAE widens at higher compression ratios across all QA datasets. Cross-domain improvements are consistently larger than in-domain gains, reaching up to $107.70\%$ relative improvement in the average F1 at $500\rightarrow 1$ compression.
169
+
170
+ Scalability and Robustness Context length influences performance differently across domains. For in-domain tasks, the performance decrease stabilizes with increasing context length. In cross-domain scenarios, longer average context lengths correlate with larger improvements, as shown by TextbookQA and TriviaQA, which achieve substantial relative gains of $395.24\%$ and $432.97\%$ respectively over ICAE at $500 \rightarrow 1$ compression.
171
+
172
+ $500\mathrm{x}$ Compressor demonstrates better robustness to increased compression ratios as well, with average F1 scores decreasing by only 0.42 points from $500 \rightarrow 16$ to $500 \rightarrow 1$ , compared to ICAE's 15.26-point reduction.
173
+
174
+ Stability The performance improvements exhibit consistency in both absolute and relative gains across different compression ratios and datasets. This stability is observed in F1 and EM improvements and remains in both in-domain and cross-domain evaluations.
175
+
176
+ # 4.4 Case Study
177
+
178
+ Table 5 presents comparative examples of text regeneration and QA pairs from $500\mathrm{x}$ Compressor and the baselines. The examples confirm the earlier findings that the baselines exhibit higher rates of information loss, mistakes, and hallucinations than $500\mathrm{x}$ Compressor in both text regeneration and QA tasks.
179
+
180
+ While regeneration quality typically correlates with QA performance, the examples reveal exceptions where accurate text regeneration leads to incorrect QA responses, and imperfect regeneration pro
181
+
182
+ <table><tr><td>Original</td><td>500xCompressor (ours)</td><td>ICAE (baseline)</td></tr><tr><td>We show that every reciprocity sheaf gives rise to a cycle (pre)module in the sense of Rost over a perfect field. Over a perfect field of positive characteristic, we show that the first cohomology group of a logarithmic de Rham-Witt sheaf has a partial cycle module structure. As a consequence, we show that Kato complexes of logarithmic de Rham-Witt sheaves satisfy functoriality properties similar to Rost&#x27;s cycle complexes.</td><td>We show that every reciprocity sheaf gives rise to a cycle (pre)module in the sense of Rost over a perfect field. Over a perfect field of positive characteristic, we show that the first co-homology group of a logarithmic de Rham-Witt cycle module has a partial cycle structure. As a consequence, we show that Kato modules of logarithmic de Rham-Witt complexes satisfy functorial properties similar to Rost&#x27;s cycle complexes.</td><td>We show that every sheaf reciprocity gives rise to a cycle (pre)module in the sense of Rost over a perfect field Rost cycle. In the perfect field case, we show that over a positive characteristic field, the first logarithmic de Rham-Witt sheaf cohomology group of a Witt log-Witt cycle has a partial cycle module structure decomposition. As a consequence, we show that Kato&#x27;s cycle complexes of logarithmic de Rham-Witt sheaves satisfy functoriality properties similar to Rost cycle complexes.</td></tr><tr><td>Q: What type of sheaf gives rise to a cycle premodule?A: Every reciprocity sheaf.</td><td>A: a reciprocity sheaf</td><td>A: a reciprocity sheaf of (logarithmic) differential forms</td></tr><tr><td>Q: Over what type of field do we show that Kato complexes satisfy functoriality properties?A: Over a perfect field of positive characteristic.</td><td>A: perfect fields of positive characteristic</td><td>A: a perfect field of positive characteristic zero</td></tr><tr><td>Q: What is the structure of the first cohomology group of a logarithmic de Rham-Witt sheaf?A: a partial cycle module structure</td><td>A: a partial cycle complex</td><td>A: a partial Kato cycle complex</td></tr><tr><td>Q: What type of complexes satisfy functoriality properties similar to Rost&#x27;s cycle complexes?A: Kato complexes of logarithmic de Rham-Witt sheaves</td><td>A: Kato-Witt cycle complexes of logarithmic de Rham-Witt sheaves</td><td>A: Kato&#x27;s complexes of logarithmic de Rham-Witt sheaves</td></tr></table>
183
+
184
+ duces correct answers. This observation highlights the relationship between compression quality and task performance.
185
+
186
+ # 4.5 Ablation Studies
187
+
188
+ The performance of compression models is influenced by several variables, including the compression method (ICAE or 500xCompressor), task type (in-domain or cross-domain datasets), context length (length of text to be compressed), and compression ratio (number of compression tokens). The influences of these variables have been discussed in Section 4.3. To further analyze the influence of semantic information, we compare performance on original ArxivCorpus texts versus semantically meaningless texts.
189
+
190
+ Table 6 demonstrates that semantic understanding improves compression quality, with $500\mathrm{x}$ Compressor achieving $99.48\%$ BLEU score on ArxivCorpus texts compared to $11.77\%$ on random texts. The performance gap persists in semantically meaningless scenarios, where $500\mathrm{x}$ Compressor main
191
+
192
+ Table 5: Examples of regenerated texts and QA pairs provided by $500\mathrm{x}$ Compressor and ICAE. 96 tokens of the original text were compressed, which were then used for QA. Differences between the gold standard and the output include mistakes (red, containing incorrect text), information loss (yellow and italic, missing some text), hallucinations (blue, including text not present in the original), and paraphrasing (green, rephrasing the original text).
193
+
194
+ <table><tr><td rowspan="2">Dataset Length</td><td colspan="2">Arxiv</td><td colspan="2">Random</td></tr><tr><td>96</td><td>192</td><td>96</td><td>192</td></tr><tr><td>Ours500→16</td><td>99.48</td><td>96.21</td><td>11.77</td><td>2.78</td></tr><tr><td>ICAE500→16</td><td>81.85</td><td>55.90</td><td>2.06</td><td>0.84</td></tr><tr><td>Absolute Δ</td><td>17.62</td><td>40.31</td><td>9.70</td><td>1.93</td></tr></table>
195
+
196
+ Table 6: Text regeneration performance (BLEU scores) on semantic (ArxivCorpus) and non-semantic (Random) texts. Random texts are generated by increasing each token ID from the ArxivCorpus texts by one position. Arxiv is ArxivCorpus.
197
+
198
+ tains an advantage over ICAE (11.77% versus 2.06%), showing robust and improved preservation of both semantic and format-related information.
199
+
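+ The "Random" control in Table 6 is described as a token-ID shift of the original texts. A minimal sketch of that procedure, assuming a HuggingFace tokenizer and wrap-around at the vocabulary boundary, is:
+
+ ```python
+ # Sketch: build a non-semantic control text by shifting every token ID by one,
+ # as described for the "Random" setting in Table 6.
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
+
+ def shift_tokens(text: str) -> str:
+     ids = tokenizer(text, add_special_tokens=False)["input_ids"]
+     shifted = [(i + 1) % tokenizer.vocab_size for i in ids]  # wrap-around is an assumption
+     return tokenizer.decode(shifted)
+ ```
+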
200
+ # 5 Discussions
201
+
202
+ The differences between 500xCompressor and ICAE could be better understood by comparing them to Prompt Tuning (Lester et al., 2021) and Prefix Tuning (Li and Liang, 2021). In Prompt Tuning, prefixed special tokens are trained to guide the model in completing specific downstream tasks.
203
+
204
+ Similarly, ICAE compresses contexts into prefixed special tokens for downstream tasks. Unlike Prompt Tuning, Prefix Tuning also trains the KV values associated with the prefixed special tokens. 500xCompressor, akin to Prefix Tuning, compresses texts into the KV values of prefixed special tokens. In Prompt Tuning or Prefix Tuning, the prefixed special tokens (and their KV values) only save the instruction for the downstream task. However, in ICAE and 500xCompressor, these tokens compress detailed information within the context in addition to the instruction.
205
+
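+ The analogy can be made concrete by contrasting the two call patterns for a frozen HuggingFace-style decoder: embedding-level conditioning (Prompt Tuning / ICAE) versus per-layer KV conditioning (Prefix Tuning / 500xCompressor). The shapes and names below are illustrative assumptions.
+
+ ```python
+ # Two ways to condition a frozen decoder on k "virtual" tokens.
+ import torch
+
+ def embedding_conditioning(decoder, soft_embeds, input_embeds):
+     # Prompt-Tuning / ICAE style: prepend k soft vectors at the embedding level.
+     return decoder(inputs_embeds=torch.cat([soft_embeds, input_embeds], dim=1))
+
+ def kv_conditioning(decoder, prefix_kv, input_ids):
+     # Prefix-Tuning / 500xCompressor style: hand the decoder k precomputed
+     # key/value entries for every layer (the H_C of this paper).
+     return decoder(input_ids=input_ids, past_key_values=prefix_kv)
+ ```
+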
206
+ There are three ways to understand the compressed tokens generated from natural language tokens: as memory, a new information format, and a new LLM language. Ge et al. (2024) associated text compression with working memory, viewing compressed tokens as an efficient way for LLMs to store knowledge. Cheng et al. (2024) interpreted text compression as a new information format, where compressed tokens, combined with natural language tokens, provide more information and have higher information richness. Jiang et al. (2023a) treated the compressed prompt as a new language for LLM. There are three elements that define a language: saving information, transmitting information, and adaptive evaluation. The compressed tokens could regenerate the original text, indicating that the information has been saved. Furthermore, these tokens can be used for downstream tasks and answer related questions, demonstrating their ability to transfer information. The ability of the compression models to process unseen texts further shows their generalization ability and adaptability. These characteristics make compressed tokens an efficient new language for LLMs.
207
+
208
+ # 6 Related Work
209
+
210
+ This work is related to prompt compression. There are two main approaches to reducing the number of prompt tokens: hard prompts and soft prompts.
211
+
212
+ Hard prompt methods identify and delete low-information content in the prompt. Li et al. (2023) proposed SelectiveSentence in 2023, which identifies rich-information content at the sentence or word level. Later, Jiang et al. (2023a) proved that LLMs could understand incomplete words or sentences, leading to the development of LLMLingua, LongLLMLingua, and LLMLingua-2, which delete useless tokens even if fluency is interrupted (Jiang et al., 2023a,b; Pan et al., 2024).
213
+
214
+ Soft prompt methods compress natural language tokens into a small number of special tokens. Wingate et al. (2022) optimized the difference between the answers generated by the original prompt and the compressed prompt, but this method lacked generalization, requiring training for each new prompt. Mu et al. (2024) addressed this by proposing GIST tokens, but their limitations included the need to fine-tune the original LLM and the short length of prompts to be compressed, typically fewer than thirty tokens. ICAE resolved these issues by pretraining the compression model and avoiding additional parameters during decoding, allowing compression of texts up to around 500 tokens without changing the original LLM (Ge et al., 2024). However, the maximum compression ratio of ICAE is about $15\mathrm{x}$. To increase the text length for compression, Chevalier et al. (2023) proposed AutoCompressor, which progressively compresses the prompt but, like GIST tokens, requires fine-tuning the LLM and a complex training process. Other works analyze text compression within paragraphs (Ren et al., 2023). Soft prompts are also applied in RAG through xRAG and COCOM (Cheng et al., 2024; Rau et al., 2024).
215
+
216
+ It is worth noting that 500xCompressor is fundamentally a prompt compression method based on the soft prompt rather than a KV cache compression approach. While the KV values of compression tokens are used for inference, they remain unchanged throughout the process, with all compression processes done on the input prompts.
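+
+ A minimal sketch of how such a frozen, precomputed cache could be consumed at inference time (ours, assuming a Hugging Face-style causal LM; attention masks and position ids are omitted for brevity):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def greedy_decode_with_cache(model, tokenizer, compressed_cache, question, max_new_tokens=32):
+     # The compressed KV cache is passed once as `past_key_values` and is never rewritten;
+     # only the decoder's own KV entries are appended as new tokens are generated.
+     ids = tokenizer(question, return_tensors="pt").input_ids
+     cache, generated = compressed_cache, []
+     for _ in range(max_new_tokens):
+         out = model(input_ids=ids, past_key_values=cache, use_cache=True)
+         next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
+         generated.append(next_id.item())
+         cache, ids = out.past_key_values, next_id
+     return tokenizer.decode(generated)
+ ```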
217
+
218
+ # 7 Conclusions
219
+
220
+ This paper proposes 500xCompressor, a prompt compression method capable of compressing any text and all tokens within it. 500xCompressor achieves a high compression ratio while retaining most capabilities of non-compressed prompts. This method shows that current prompts are highly compressible, opening further directions for compression applications.
221
+
222
+ Future work would involve applications such as in-context learning, personalization, and RAG. 500xCompressor has shown good generalization ability on cross-domain tasks, and increasing the size and diversity of the training data is expected to enable 500xCompressor to handle more tasks (for example, tasks requiring flexible formats and long outputs) and achieve better results.
223
+
224
+ # Limitations
225
+
226
+ A consideration in our work was the careful selection of training data to avoid copyright issues. We chose to use the ArxivCorpus rather than datasets like the Pile, as Arxiv papers are officially hosted by Cornell University under clear copyright terms. Future development should also carefully consider copyright when using different datasets.
227
+
228
+ # Ethics Statement
229
+
230
+ No ethical approval was required for this study.
231
+
232
+ # Availability Statement
233
+
234
+ The code for this study is openly available at https://github.com/ZongqianLi/500xCompressor.
235
+
236
+ # References
237
+
238
+ Xin Cheng, Xun Wang, Xingxing Zhang, Tao Ge, Si-Qing Chen, Furu Wei, Huishuai Zhang, and Dongyan Zhao. 2024. xRAG: Extreme context compression for retrieval-augmented generation with one token. arXiv preprint arXiv:2405.13792.
239
+ Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting language models to compress contexts. In The 2023 Conference on Empirical Methods in Natural Language Processing.
240
+ Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering (MRQA) at EMNLP.
241
+ Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. 2024. In-context autoencoder for context compression in a large language model. In The Twelfth International Conference on Learning Representations.
242
+ Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, ..., and Zhiyu Ma. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.
243
+ Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
244
+ Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023a. LLMLingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13358-13376, Singapore. Association for Computational Linguistics.
245
+
246
+
247
+ Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023b. LongLLMLingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression. arXiv preprint arXiv:2310.06839.
248
+ Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.
249
+ Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5376-5384.
250
+ Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
251
+ Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.
252
+ Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
253
+ Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333-342, Vancouver, Canada. Association for Computational Linguistics.
254
+ Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics.
255
+
256
+ Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023. Compressing context to enhance inference efficiency of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6342-6353, Singapore. Association for Computational Linguistics.
257
+
258
+ Jesse Mu, Xiang Lisa Li, and Noah Goodman. 2024. Learning to compress prompts with gist tokens. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.
259
+
260
+ Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Ruhle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, and Dongmei Zhang. 2024. LLMLingua-2: Data distillation for efficient and faithful task-agnostic prompt compression. In Findings of the Association for Computational Linguistics: ACL 2024, pages 963-981, Bangkok, Thailand. Association for Computational Linguistics.
261
+
262
+ David Rau, Shuai Wang, Hervé Déjean, and Stephane Clinchant. 2024. Context embeddings for efficient answer generation in RAG. arXiv preprint arXiv:2407.09252.
263
+
264
+ Siyu Ren, Qi Jia, and Kenny Q. Zhu. 2023. Context compression for auto-regressive transformers with sentinel tokens. In The 2023 Conference on Empirical Methods in Natural Language Processing.
265
+
266
+ David Wingate, Mohammad Shoeybi, and Taylor Sorensen. 2022. Prompt compression and contrastive conditioning for controllability and toxicity reduction in language models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5621-5634, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
267
+
268
+ # A Appendices
269
+
270
+ # A.1 Model Training
271
+
272
+ The training parameters for 500xCompressor and ICAE are detailed in Table 7. The main packages are torch 2.3.1 and transformers 4.42.3; the full environment can be obtained from the code repository (see the Availability Statement). The evaluation losses for both models are illustrated in Figure 4. All models converged successfully, with 500xCompressor demonstrating better performance than ICAE, as indicated by the evaluation loss.
273
+
274
+ <table><tr><td></td><td colspan="2">Pretraining</td><td colspan="2">Finetuning</td></tr><tr><td></td><td>500→16</td><td>500→1</td><td>500→16</td><td>500→1</td></tr><tr><td>Total steps</td><td>42000</td><td>103800</td><td>20000</td><td>10000</td></tr><tr><td>Warm-up steps</td><td>300</td><td>300</td><td>300</td><td>300</td></tr><tr><td>Learning rate</td><td>1e-4</td><td>1e-4</td><td>5e-5</td><td>5e-5</td></tr><tr><td>Batch size</td><td>4</td><td>4</td><td>4</td><td>4</td></tr><tr><td>Optimizer</td><td>AdamW</td><td>AdamW</td><td>AdamW</td><td>AdamW</td></tr></table>
275
+
276
+ Table 7: Training parameters for 500xCompressor and ICAE.
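+
+ For convenience, the hyperparameters in Table 7 can be mirrored in a small config object; the sketch below simply restates the table (the class and constant names are ours, not part of the released code):
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class TrainConfig:
+     total_steps: int
+     warmup_steps: int = 300
+     learning_rate: float = 1e-4
+     batch_size: int = 4
+     optimizer: str = "AdamW"
+
+ PRETRAIN_500_TO_16 = TrainConfig(total_steps=42_000)
+ PRETRAIN_500_TO_1 = TrainConfig(total_steps=103_800)
+ FINETUNE_500_TO_16 = TrainConfig(total_steps=20_000, learning_rate=5e-5)
+ FINETUNE_500_TO_1 = TrainConfig(total_steps=10_000, learning_rate=5e-5)
+ ```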
277
+
278
+ # A.2 ArxivCorpus and ArxivQA Dataset
279
+
280
+ Source of Arxiv abstracts in the ArxivCorpus: https://www.kaggle.com/datasets/Cornell-University/arxiv
281
+
282
+ The detailed information for ArxivCorpus and the ArxivQA dataset is shown in Table 8.
283
+
284
+ The prompt to generate the QA pairs:
285
+
286
+ ```jsonl
287
+ context: {truncated_context}
288
+ task: design the {number} best extractive question answering pairs for the context to test information loss
289
+ requirement: the question should be direct; the question should try to use the same words in the context; the answer should directly appear in the context (a span of the context); the answer should not be in the question; just output the results in format and do not output other words
290
+ output json format: [{"id": 1, "question": "", "answer": ""}, {"id": 2, "question": "", "answer": ""}, ...]
291
+ ```
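+
+ A sketch of how this template might be filled and its output parsed (ours; `call_llm` is a hypothetical stand-in for whichever API generated the QA pairs, and the answer-span check follows the requirement stated in the prompt):
+
+ ```python
+ import json
+
+ # Template abbreviated from the prompt above; {truncated_context} and {number} are filled per call.
+ PROMPT_TEMPLATE = (
+     "context: {truncated_context}\n"
+     "task: design the {number} best extractive question answering pairs for the context "
+     "to test information loss\n"
+     'output json format: [{{"id": 1, "question": "", "answer": ""}}, ...]'
+ )
+
+ def generate_qa_pairs(call_llm, truncated_context, number=5):
+     prompt = PROMPT_TEMPLATE.format(truncated_context=truncated_context, number=number)
+     raw = call_llm(prompt)              # hypothetical LLM call returning the JSON string
+     pairs = json.loads(raw)
+     # Keep only pairs whose answer is an exact span of the context, per the requirement.
+     return [p for p in pairs if p.get("answer") and p["answer"] in truncated_context]
+ ```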
292
+
293
+ # A.3 Question Answering
294
+
295
+ Prompt for QA (Instruct):
296
+
297
+ Please finish the extractive question answering task. Just output the answer. Context: {context} Question: {question} Answer:
298
+
299
+ The writing of this paper and the code were assisted by ChatGPT and Claude.
300
+
301
+ <table><tr><td rowspan="2"></td><td rowspan="2">Train</td><td colspan="2">ArxivCorpus</td><td rowspan="2">Test</td><td rowspan="2">Train</td><td colspan="2">ArxivQA Dataset</td></tr><tr><td>Development</td><td>Test</td><td>Development</td><td>Test</td></tr><tr><td>Number of data records</td><td>2353924</td><td>3000</td><td>2500</td><td>250000</td><td>2500</td><td>5000</td><td></td></tr><tr><td>Knowledge cutoff</td><td>Pre 07/2023</td><td>01-04/2024</td><td>01-04/2024</td><td>Pre 07/2023</td><td>Pre 07/2023</td><td>01-04/2024</td><td></td></tr><tr><td>Source</td><td colspan="3">Abstracts from Arxiv</td><td colspan="2">Train set of ArxivCorpus</td><td colspan="2">Test set of ArxivCorpus</td></tr></table>
302
+
303
+ Table 8: Detailed information about the ArxivCorpus and the ArxivQA dataset.
304
+
305
+ ![](images/7cb29487d695366e175164d40545b5b427702c9c19330299a1cd51a9fb90a856.jpg)
306
+
307
+ ![](images/7008ad98fd17267dd92b29d19a5d5ecb1d696404012dd93e28bc076346b8035a.jpg)
308
+
309
+ ![](images/f21586cd35c00d7a2c3b3018ba052cfb80c0ee80e5de72b771b46e59bafdf4bd.jpg)
310
+ Figure 4: Evaluation loss for 500xCompressor and ICAE during pretraining and fine-tuning.
311
+
312
+ ![](images/012fc9c94649c76f4d51f57df79cf4e666883a970a66ec85c677735163629845.jpg)
2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63ac6c91ee5d160cf907fca2314be2a0856b92a3442651fb6504772e9a3fd798
3
+ size 803119
2025/500xCompressor_ Generalized Prompt Compression for Large Language Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/cf0b9bfd-fca4-45d1-9d19-70a61385e078_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/cf0b9bfd-fca4-45d1-9d19-70a61385e078_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/cf0b9bfd-fca4-45d1-9d19-70a61385e078_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:502e02a8253c560515fa9350cdfd0b457b981745892860e2b7f5becda8ed7361
3
+ size 726405
2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/full.md ADDED
@@ -0,0 +1,634 @@
 
 
 
 
1
+ # A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models
2
+
3
+ Jiesong Liu $^{\diamond}$ , Brian Park $^{\diamond}$ , Xipeng Shen $^{\diamond}$ , Department of Computer Science, North Carolina State University jliu93@ncsu.edu, bcpark@ncsu.edu, xshen5@ncsu.edu
4
+
5
+ # Abstract
6
+
7
+ Large Language Models (LLMs) are cutting-edge generative AI models built on transformer architecture, which tend to be highly memory-intensive when performing real-time inference. Various strategies have been developed to enhance the end-to-end inference speed for LLMs, one of which is speculative decoding. This technique involves running a smaller LLM (draft model) for inference over a defined window size, denoted as $\gamma$ , while simultaneously being validated by the larger LLM (target model). Choosing the optimal $\gamma$ value and the draft model is essential for unlocking the potential of speculative decoding. But it is difficult to do due to the complicated influence from various factors, including the nature of the task, the hardware in use, and the combination of the large and small models. This paper introduces on-the-fly adaption of speculative decoding, a solution that dynamically adapts the choices to maximize the efficiency of speculative decoding for LLM inferences. As a drop-in solution, it needs no offline benchmarking or training. Experiments show that the solution can lead to $3.55 - 16.48\%$ speed improvement over the standard speculative decoding, and $1.2 - 3.4\times$ over the default LLMs.
8
+
9
+ # 1 Introduction
10
+
11
+ Large Language Models (LLMs) are state-of-the-art generative AI models built on transformer-based blocks (Brown et al., 2020; Ouyang et al., 2022). LLMs have an enormous number of parameters, and recent research not only focuses on training them efficiently but also explores how to optimize inference performance. In fact, there is evidence indicating that even small improvements in LLM inference speed can result in significant cost savings. For instance, Google's infrastructure optimizations have demonstrated that improving inference efficiency can lead to substantial reductions in operational expenses. In large-scale deployments, a $1\%$
12
+
13
+ increase in speed can indeed translate into millions of dollars saved (AI, 2023; Cloud, 2023).
14
+
15
+ Due to the autoregressive and memory-intensive nature of LLMs, it is challenging to optimize its inference throughput. Sampling for a new token depends on the previously generated tokens. Researchers are exploring mainly two approaches to circumvent this sequential dependence for more efficient parallel executions. One is to change the model architecture thus sampling granularity to parallelize the decoding process. Medusa (Cai et al., 2024), for example, introduces multiple decoding heads to generate tokens in parallel; Lookahead Decoding (Jacobi Decoding) (Fu et al., 2024) generates multiple tokens in parallel using nonlinear systems. This approach changes the neural architecture and hence requires new training, the high costs of which makes them difficult to adopt in practice. The other approach is speculative decoding (Leviathan et al., 2023; Chen et al., 2023). This approach first runs inference with a smaller LLM $M_{q}$ , called the draft model, to generate the next $\gamma$ tokens ( $\gamma$ is called speculation window size). After generating one window of tokens (called a speculation step), a verification step uses the Large LLM $M_{p}$ , called the target model, to validate those tokens in parallel. Upon finding the first incorrect token, the execution throws away the rest of the tokens speculated by the draft model in that window and corrects the first rejected token (or appends a new token when all of the tokens are accepted). From there, it continues the speculation-validation process. This approach allows direct use of the pretrained LLMs, making it easier for adoption.
16
+
17
+ What is crucial for unlocking the potential of speculative decoding is to choose the best speculation window length, $\gamma$ , and the best draft model to use. The best choices depend on the nature of the inference task, target model, software stack, hardware, and resource availability or workload changes (if running in a cloud). Suboptimal choices
18
+
19
+ may not only substantially throttle the benefits but sometimes cause slowdowns in inference (see Section 6). The standard approach (Leviathan et al., 2023; Chen et al., 2023) relies on offline trial-and-error-based search, which not only takes a long time but, more importantly, cannot adapt to changes in the tasks, target models, software stacks, hardware, or other runtime conditions. A recent study, SpecDec++ (Huang et al., 2024), attempts to improve it through a machine learning model. It trains a ResNet on many samples collected offline and uses it to predict, at each generated token in actual inferences, whether the execution should stop speculation, so as to adapt the speculation window. Although the work shows some improvement in experiments, it requires hundreds or thousands of GPU-hours (Section 6.2) to train the model for one target-draft pair on one kind of task and one software and hardware configuration. Modern LLM servers often host many LLMs and their variants (e.g., different quantizations, with LoRA or other fine-tuning models) and have various software and hardware configurations and task types, making the solution difficult to adopt in production systems.
20
+
21
+ This paper describes the first-known exploration of on-the-fly adaptive speculation, a drop-in solution that adapts speculative decoding at runtime without ahead-of-time training. Our exploration covers both speculation window size $\gamma$ and the choice of draft models. It experiments with several agile online methods for the adaptation, including a state machine-based mechanism, a cache-enabled state machine-based method, a reinforcement learning-based approach, and a token accuracy-based online window size optimization method. It analyzes these methods and evaluates them on four LLMs across three GPU models and four types of inference tasks. The results show that on-the-fly adaptive speculation, especially the online window size optimization, can deliver similar or even better improvements than the prior method that uses extensive ahead-of-time trainings, leading to 3.55-16.48% speed improvement over the standard speculative decoding, and 1.2-3.4× over the default LLMs. As a drop-in solution, this new approach needs no model changes, ahead-of-time preparation, lengthy training, or extensive benchmarking. It automatically adapts the optimal window size and directs the requests to the appropriate draft models for speculation, especially suitable for large LLM service providers.
22
+
23
+ It is worth mentioning that besides adapting the
24
+
25
+ speculation process, there are some other methods explored in recent studies to improve speculative decoding (Li et al., 2024; Yan et al., 2024; Spector and Re, 2023; Hooper et al., 2023). Online Speculative Decoding (Liu et al., 2023), for instance, uses knowledge distillation to continuously train the smaller draft model during inference, enhancing performance. SpecInfer (Miao et al., 2023) introduces a tree-based decoding algorithm that uses the draft model to speculate multiple possible token sequences in parallel and then validates each of these sequences by the target model to keep the longest validated one. The on-the-fly adaptive speculation proposed in this current paper is from a different angle. It is hence complementary to those studies in the sense that it can be integrated into the speculation process in those solutions to further improve their effectiveness.
26
+
27
+ # 2 Guess-and-Verify in LLMs
28
+
29
+ In LLM inference, the tokens generated later are dependent on the tokens generated earlier. This sequential dependency of autoregressive decoding in LLMs has led to the development of new techniques aimed at parallelizing the decoding process. Given that text is tokenized, some tokens can be easier or harder to predict by a lower-parameter LLM. This has sparked a new area of research known as "guess-and-verify" optimization (Li et al., 2024; Yan et al., 2024; Spector and Re, 2023; Hooper et al., 2023). In this approach, smaller draft models efficiently guess a number of tokens, which are then verified in parallel by a larger target model. It is a lossless optimization, maintaining the accuracy of the results.
30
+
31
+ Speculative decoding is one typical "guess-and-verify" approach in LLM optimization. In this technique, when an LLM samples logits, it essentially predicts the probabilities of the next token. Speculative decoding takes advantage of this by allowing a smaller model to guess the easier tokens based on its own sampling of the distribution. These tokens are then verified by a larger, more accurate model.
32
+
33
+ In speculative decoding, the process involves guessing a set of tokens using the smaller model $M_q$ within a fixed window size, $\gamma$ , and then verifying these $\gamma$ tokens using the larger model $M_p$ by sampling $\gamma + 1$ tokens in parallel. If all tokens are accepted, the $\gamma + 1$ tokens are appended to the generated sequence, and the process continues. If one token (say $(i + 1)th$ ) is rejected, the algo
34
+
35
+ rithm accepts the $i$ correct tokens, resample the $(i + 1)th$ from an adjusted distribution in the validation, and continues the next round of guessing. The speculation and verification process is detailed in Algorithm 1.
36
+
37
+ Algorithm 1 Speculative Decoding (Leviathan et al., 2023)
38
+ 1: function speculativeDecoding $(M_p, M_q, prefix)$
39
+ 2: ▷ Sample $\gamma$ guesses $x_1, \ldots, x_\gamma$ from $M_q$ autoregressively.
40
+ 3: for $i = 1$ to $\gamma$ do
41
+ 4: $q_i(x) \sim M_q(prefix + [x_1, \dots, x_{i-1}])$
42
+ 5: $x_i \sim q_i(x)$
43
+ 6: Run $M_p$ in parallel.
44
+ 7: $(p_1(x), \dots, p_{\gamma+1}(x)) \gets M_p(prefix), \dots, M_p(prefix + [x_1, \dots, x_\gamma])$
45
+ 9: Determine the number of accepted guesses $n$ .
46
+ 10: $r_1 \sim U(0, 1), \dots, r_\gamma \sim U(0, 1)$
47
+ 11: $n \gets \min(\{i - 1 | 1 \leq i \leq \gamma, r_i > \frac{p_i(x)}{q_i(x)}\} \cup \{\gamma\})$
48
+ 12: Adjust the distribution from $M_p$ if needed.
49
+ 13: $p'(x) \gets p_{n+1}(x)$
50
+ 14: if $n < \gamma$ then
51
+ 15: $p'(x) \gets \text{norm}(\max(0, p_{n+1}(x) - q_{n+1}(x)))$
52
+ 16: Return one token from $M_p$ and $n$ tokens from $M_q$ .
53
+ 17: $t \sim p'(x)$
54
+ 18: return prefix + $[x_1, \dots, x_n, t]$
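+
+ The accept/reject core of Algorithm 1 fits in a few lines; the PyTorch sketch below (ours, not the paper's code) assumes the target and draft distributions at each drafted position have already been computed:
+
+ ```python
+ import torch
+
+ def verify_window(p_logits, q_logits, draft_tokens):
+     # p_logits: (gamma + 1, vocab) target-model logits; q_logits: (gamma, vocab) draft-model logits
+     # draft_tokens: (gamma,) tokens sampled from the draft model
+     p = torch.softmax(p_logits.float(), dim=-1)
+     q = torch.softmax(q_logits.float(), dim=-1)
+     gamma = draft_tokens.numel()
+     n = gamma
+     for i in range(gamma):
+         x = draft_tokens[i]
+         if torch.rand(()) > p[i, x] / q[i, x]:          # reject the i-th guess
+             n = i
+             break
+     if n < gamma:
+         resid = torch.clamp(p[n] - q[n], min=0.0)       # norm(max(0, p - q))
+         dist = resid / resid.sum()
+     else:
+         dist = p[gamma]                                  # all guesses accepted: sample the bonus token
+     t = torch.multinomial(dist, 1)
+     return torch.cat([draft_tokens[:n], t])              # n accepted draft tokens + 1 target token
+ ```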
55
+
56
+ # 3 Overview
57
+
58
+ Our goal is to enable real-time adjustments in speculative decoding to achieve higher throughput without requiring extensive pre-training, making it a practical solution for large-scale LLM deployments. Figure 1 gives an overview of our solution. The workflow goes as follows. At the beginning, it sets up the target model and different draft model options. For each prompt, our solution as in Figure 1 involves two steps. First, it finds a proper draft model for the given prompt. This is done by extracting features of the prompt to estimate the single token accuracy. From there, the method approximates the acceptance rate and ultimately the throughput so it can choose a proper draft model. Second, it runs speculations, where $\gamma$ is adapted on the fly with the given model pairing. In the following content, we will first introduce the adaptive window size selection (Section 4) followed by adaptive draft model selection (Section 5). A detailed workflow example is shown in the bottom of Figure 1 and is explained in Appendix C.2.
59
+
60
+ # 4 Adaptive Window Size Selection
61
+
62
+ In this section, we focus on how to determine the best window size for a given target-draft model pair. We first introduce the analytic model for capturing
63
+
64
+ the relationship between the speculation setting and speculation benefits. With that, we present an analytical model-guided adaption (Section 4.1) and three other agile algorithms for adaptively changing $\gamma$ during speculative decoding (Section 4.2). The agility of these algorithms is essential for minimizing the runtime overhead.
65
+
66
+ # 4.1 Method 1: Analytical Model-Guided Adaption
67
+
68
+ A speculation window size that is too large risks high overhead if verification fails early, while a size that is too small misses out on the full benefits. The optimal size varies depending on the language model, contexts, and speculation accuracy. We translate this trade-off into an objective function to adaptively determine the optimal window size across various configurations. For each prompt, we want to minimize the end-to-end latency in generating a response with a fixed number of tokens. We define our objective function as the expected number of tokens verified as correct per unit time, aiming to maximize this function by optimizing the window size $\gamma$ :
69
+
70
+ Definition 1 (formulating objective). Let $a_{q}$ represent the latency of generating one token by the draft model, and $b_{p}(\gamma)$ represent the latency of a verification step with window size $\gamma$ . For $t = 1,2,\dots$ , let $Acc(x_{t}|X_{<t})$ be the accuracy of the speculation of a single token given the current context $X_{<t} = \{x_{1},\dots ,x_{t - 1}\}$ . The window size $\gamma$ for the current speculation step can be determined by optimizing the objective
71
+
72
+ $$
73
+ \mathcal {G} = \max _ {\gamma} \frac {1 - A c c \left(x _ {t} \mid X _ {< t}\right) ^ {\gamma + 1}}{\left(1 - A c c \left(x _ {t} \mid X _ {< t}\right)\right) \left(\gamma a _ {q} + b _ {p} (\gamma)\right)}. \tag {1}
74
+ $$
75
+
76
+ Given the single token accuracy $\beta = Acc(x_{t}|X_{< t})\in [0,1]$ , the expected accepted number of tokens in a $\gamma$ -long speculation window follows truncated geometric distribution, and is given as $\frac{1 - \beta^{\gamma + 1}}{1 - \beta}$ (see Appendix B.1). The total latency of one speculation step and verification step is calculated as $\gamma a_q + b_p(\gamma)$ . Therefore, the expected number of tokens verified as correct per unit time given a window size $\gamma$ is given by $\frac{1 - \beta^{\gamma + 1}}{(1 - \beta)(\gamma a_q + b_p(\gamma))}$ , and thus objective (1).
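+
+ Since the objective involves only the scalar accuracy estimate and two latency terms, the best window can be found by a direct scan over small integers. A minimal sketch (ours, with made-up latencies) is:
+
+ ```python
+ def best_window_size(beta, a_q, b_p, gamma_max=16):
+     # beta: estimated single-token accuracy; a_q: draft-model latency per token;
+     # b_p: callable giving the verification latency for a window of size gamma.
+     def rate(gamma):
+         expected_accepted = (1 - beta ** (gamma + 1)) / (1 - beta)
+         return expected_accepted / (gamma * a_q + b_p(gamma))
+     return max(range(0, gamma_max + 1), key=rate)
+
+ # Example with a roughly constant verification latency:
+ gamma_star = best_window_size(beta=0.8, a_q=5.0, b_p=lambda g: 40.0)
+ ```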
77
+
78
+ Algorithm. To adaptively determine the optimal $\gamma$ , we need to figure out the unknown terms $a_{q}, b_{p}(\gamma), Acc(x_{t}|X_{<t})$ in Equation 1. Using estimation for them, the algorithm goes as follows. At the start of each speculation step, it conducts the
79
+
80
+ ![](images/d443b66907765da6a33d3896a31abeb3fc9ab2a632be44b5901d300e6f835e01.jpg)
81
+ Figure 1: Our on-the-fly adaptive speculation framework. When a prompt arrives, our scheduler directs it to the draft model $M_{q}$ . During speculation, our framework automatically adapts the right speculation window size $\gamma$ . The speculation is then validated by the target model $M_{p}$ .
82
+
83
+ following two operations before it can solve the objective (1). First, it estimates $a$ and $b$ . These values are derived by observing the most recent steps. Second, it estimates $Acc(x_{t}|X_{< t})$ based on the recent history. We use maximum likelihood estimation over the last $h$ speculations, ensuring the estimate $\widehat{A}cc$ reflects both locality and reduced variance (details in Appendix B.2). In our algorithm, we let $\gamma (j)$ be the speculation window size during the $j$ -th most recent verification step, and $V(\gamma (j),X_{< t_j})$ the number of accepted tokens in this speculation window. We estimate $Acc(x_{t}|X_{< t})$ as
84
+
85
+ $$
86
+ \frac {\sum_ {j} V (\gamma (j) , X _ {< t _ {j}})}{\sum_ {j} V (\gamma (j) , X _ {< t _ {j}}) + \sum_ {j} \mathbf {1} \left(V (\gamma (j) , X _ {< t _ {j}}) < \gamma (j)\right)} \tag {2}
87
+ $$
88
+
89
+ where $\mathbf{1}(\cdot)$ is the indicator function. To avoid overly optimistic estimates and potential division-by-zero error when $\hat{A}cc$ gets close to 1, we set a fixed upper limit, $Acc_{\max}$ (e.g., 0.98), and cap $\hat{A}cc$ at this value.
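+
+ In code, the estimator in Equation 2 together with the $Acc_{\max}$ cap reduces to a few lines; this sketch (ours) keeps the (window size, accepted count) pairs of the most recent $h$ speculation steps:
+
+ ```python
+ def estimate_acc(history, acc_max=0.98):
+     # history: list of (gamma_j, accepted_j) for the h most recent speculation steps
+     accepted = sum(v for _, v in history)
+     rejections = sum(1 for g, v in history if v < g)   # one rejection per non-full window
+     if accepted + rejections == 0:
+         return acc_max
+     return min(accepted / (accepted + rejections), acc_max)
+
+ print(estimate_acc([(4, 4), (5, 2), (4, 3)]))  # 9 accepted, 2 rejections -> 9/11
+ ```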
90
+
91
+ Analysis. Theorem 1 gives a direct comparison of the error bound of the analytical model-guided adaption and that of the fixed window size speculation, where the gamma value is determined from offline profiling data before real deployment, showing the superior theoretical results of our method in estimating the single token accuracy. The proofs are detailed in Section A.1.
92
+
93
+ Theorem 1. Let $\beta$ be the true acceptance probability of speculative decoding steps, and let $\widehat{\beta}_{\text{adaptive}}$ and $\widehat{\beta}_{\text{fixed}}$ be the estimators obtained from the analytical model-guided adaption and fixed window selection methods, respectively. Then the variance of the adaptive estimator satisfies:
94
+
95
+ $$
96
+ \operatorname {V a r} \left(\widehat {\beta} _ {\text {a d a p t i v e}}\right) \leq \operatorname {V a r} \left(\widehat {\beta} _ {\text {f i x e d}}\right).
97
+ $$
98
+
99
+ Moreover, the expected absolute estimation error obeys:
100
+
101
+ $$
102
+ \mathbb {E} \left[ | \widehat {\beta} _ {\text {a d a p t i v e}} - \beta | \right] \leq \mathbb {E} \left[ | \widehat {\beta} _ {\text {f i x e d}} - \beta | \right].
103
+ $$
104
+
105
+ # 4.2 Other Drop-in Speculation Methods
106
+
107
+ Besides the analytic model-guided adaption, we have explored three other methods for on-the-fly $\gamma$ adaption.
108
+
109
+ Method 2: Finite State Machine (FSM)-Based Speculation. A finite state machine-based predictor (Hennessy and Patterson, 2017) is similar to an $n$ -bit saturating counter used in branch prediction. The mechanism works by decreasing $\gamma$ by 1 if a token from the draft model is rejected, and increasing $\gamma$ by 1 if all tokens are accepted. During benchmarking, we still select a value for $\gamma$ , but it is considered an upper limit, $\gamma_{\mathrm{max}}$ . If the draft and target models' distributions significantly differ, $\gamma$ will remain low, potentially even at 0. Conversely, if the models align closely, $\gamma$ should increase, approaching $\gamma_{\mathrm{max}}$ . We consider this approach particularly effective for natural language processing because certain parts of a sentence—like common phrases or syntactically predictable structures—are easier for a smaller draft model to predict. In contrast, more unique or complex sub-sequences generated by the LLM might be harder to guess. We show some examples in Appendix C.1. By adaptively changing $\gamma$ based on the previous token validations, we create a reward system that exploits patterns and predictable structures in autoregressive generation.
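+
+ The resulting update rule behaves like a saturating counter; a sketch of the per-verification-step adjustment (ours) is:
+
+ ```python
+ def fsm_update(gamma, num_accepted, window, gamma_max):
+     # Shrink gamma after a rejection; grow it when the whole window was accepted.
+     if num_accepted < window:
+         return max(gamma - 1, 0)
+     return min(gamma + 1, gamma_max)
+ ```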
110
+
111
+ Method 3: Cache-Enabled FSM-Based Speculation. We adjust $\gamma$ based on the context provided by the prompt and the history of generated tokens. In settings like question-answering, an LLM often reiterates or directly responds based on the con
112
+
113
+ text given by the user. Therefore, the user's input can inform predictions about the type of response the LLM will generate. Specifically, this approach includes a token cache that updates after every sampling step. Initially, the cache is populated with tokens in the prompt, set up before the prefill stage. As new tokens are sampled and validated during speculation, the cache is updated with any previously unseen tokens. $\gamma$ is then adjusted dynamically: It increases by one if a validated token is already in the cache, and by an additional one when all speculated tokens are accepted. Conversely, if none of the accepted tokens are in the cache, it decreases by one. We see that this approach is particularly effective for structured tasks like QA chatbot interactions or code completion, where context and history play a significant role. However, it may be less effective for short prompts expecting broad and diverse content, such as tasks that require informative or creative responses. In such cases, the lack of initial context or history means the cache is less informative, making $\gamma$ adjustments less effective, potentially leading to performance similar to the more simplistic state-based adaptation.
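+
+ A corresponding sketch for the cache-enabled variant (ours, holding the token cache as a plain Python set):
+
+ ```python
+ def cache_fsm_update(gamma, accepted_tokens, all_accepted, token_cache, gamma_max):
+     # Reward speculation windows whose accepted tokens were already seen in the prompt/history.
+     seen_before = any(t in token_cache for t in accepted_tokens)
+     token_cache.update(accepted_tokens)          # cache grows with newly validated tokens
+     if not seen_before:
+         return max(gamma - 1, 0)                 # none of the accepted tokens were in the cache
+     gamma += 1                                   # at least one validated token was cached
+     if all_accepted:
+         gamma += 1                               # extra reward when every speculated token passed
+     return min(gamma, gamma_max)
+ ```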
114
+
115
+ Method 4: Reinforcement Learning-Based Speculation. We in addition explored a reinforcement learning-based approach. We use a Q-learning agent to choose $\gamma$ . The modification to the algorithm is detailed in Algorithm 2 in Appendix B.4. The agent takes the previous states of $\gamma$ as inputs and applies an action after each validation step.
116
+
117
+ # 5 Adaptive Draft Model Selection
118
+
119
+ Besides the speculation window size, the selection of the draft model also makes a difference: A smaller draft model can make faster inferences but at the risk of a low acceptance rate, while a larger draft model renders a longer latency. To dive deeper into the problem, we analyze the relationship between the throughput of the adaptive speculation and the acceptance rate in Theorem 2. Proofs are detailed in Appendix A.2.
120
+
121
+ Theorem 2. Let $L$ be the length of the answer to a prompt, assumed fixed, and let $n$ be the total number of speculation steps. Let the acceptance rate $\rho_q$ be the number of accepted tokens divided by the total number of tokens sampled by $M_q$. The throughput $(R)$ can be formulated as
122
+
123
+ $$
124
+ R = \frac {L}{b _ {p} (\gamma) n + a _ {q} \frac {L}{\rho_ {q}}}. \tag {3}
125
+ $$
126
+
127
+ As answer length $L$ in Equation 3 is considered constant in our setting, the main influence for choosing a draft model comes from draft model latency $a$ , target model latency $b(\gamma)$ , the acceptance rate $\rho$ , and the number of speculation steps $n$ .
128
+
129
+ Influence of selecting a larger draft model. Let $c$ represent the inference latency ratio between the draft model and the target model. Choosing a larger draft model increases the single token accuracy, $\alpha = \mathbb{E}(Acc(x_t|X_{<t}))$ , and the draft latency $a$ . We estimate $d = \mathbb{E}(\gamma)$ by finding the numerical integer solution in objective 1. With $\alpha$ and the corresponding $d$ , the acceptance rate $\rho = \frac{1 - \alpha^{\gamma + 1}}{(1 - \alpha)d}$ can be determined, as shown by scattered dots in Figure 2.
130
+
131
+ ![](images/f69242ccbd1a7032c2b4554491fdeb56d7f80bcb8a228e82fbb59c9c1f30676a.jpg)
132
+ Figure 2: Results for the acceptance rate and the denominator in $n = \frac{L}{d \cdot \rho}$ across different single-token accuracy $(\alpha)$ and draft-to-target model size ratios $(c)$ .
133
+
134
+ We now analyze the influence on the number of speculation steps $n = \frac{L}{d \cdot \rho}$ . The lines in Figure 2 illustrate how the denominator of $n$ changes as $\alpha$ varies, reflecting the product of $d$ and $\rho$ .
135
+
136
+ Theorem 3. Let $\Delta n$ represent the reduction in speculation steps due to a larger draft model, $\Delta c$ the increase in latency ratio, and $\Delta \rho$ the improvement in acceptance rate. As long as the following condition holds:
137
+
138
+ $$
139
+ \Delta n > \frac {\Delta c}{\Delta \rho} L, \tag {4}
140
+ $$
141
+
142
+ the larger draft model would lead to a higher overall throughput than the smaller draft model.
143
+
144
+ A deeper look into formula 4 gives us that, when comparing two draft models, $\Delta c$ can be easily determined using sample profiling results. If we are able to approximate the increase in $\alpha$ , $\Delta \rho$ and $\Delta n(\alpha, d)$ can also be determined because their relation to $d$ , $\alpha$ , and $c$ is deterministic. Therefore, to select a suitable draft model when a new prompt arrives, we need to approximate $\alpha$ and inspect whether condition 4 holds in order to determine whether to use a larger draft model.
145
+
146
+ Typically a prompt can be represented as a vector. We represent a prompt as a vector $\mathbf{u} \in \mathbb{R}^r$ with $r > 0$ being the vector length. We model our goal $\alpha_c^{\mathbf{u}}$ of prompt $\mathbf{u}$ for a certain ratio $c$ as
147
+
148
+ $$
149
+ \alpha_ {c} ^ {\mathbf {u}} = \mathbf {u} ^ {\top} \mathbf {Z} _ {c} + \epsilon_ {c} ^ {\mathbf {u}} \tag {5}
150
+ $$
151
+
152
+ where $\mathbf{Z}_c$ is the parameter vector to determine and the random noise variable $\epsilon_c^{\mathbf{u}}$ is independent of $\mathbf{Z}_c$ . For each $\mathbf{u} \in \mathbb{R}^r$ , the random variables $\{\epsilon_c^{\mathbf{u}}\}$ are identically distributed with $\mathbb{E}(\epsilon_c^{\mathbf{u}}) = 0$ for all $\mathbf{u}$ . The vector embedding is constructed as a concatenation of the prompt length, prompt perplexity and its TF-IDF score.
153
+
154
+ Algorithm. Based on the analysis, we devise the following algorithm to select draft model. Suppose there exist $r$ linearly independent prompts $\mathbf{b}_1,\dots ,\mathbf{b}_r\in \mathbb{R}^r$ . In the beginning, for each ratio $c$ and these $r$ prompts, the algorithm runs the speculative decoding and observes the single token accuracy $\alpha_{c}^{\mathbf{b}_{p}}$ and computes the ordinary least square estimate for $\mathbf{Z}_c$ , given by
155
+
156
+ $$
157
+ \widehat {\mathbf {Z}} _ {c} = \left(\sum_ {p = 1} ^ {r} \mathbf {b} _ {p} \mathbf {b} _ {p} ^ {\top}\right) ^ {- 1} \sum_ {p = 1} ^ {r} \mathbf {b} _ {p} \alpha^ {\mathbf {b} _ {p}}.
158
+ $$
159
+
160
+ For each newly arrived prompt $\mathbf{u}$ , it computes the estimated $\hat{\alpha}_c^{\mathbf{u}}$ for potential draft-target model pairs and check Equation 4 to select the optimal draft model. In an LLM server center setting that has many machines hosting many LLMs, the selection of draft models can be implemented by redirecting requests to the appropriate nodes in the center equipped with the desired draft model and target model pair.
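+
+ Stacking the calibration prompts' feature vectors into a matrix turns the estimate above into an ordinary least-squares fit; a small NumPy sketch (ours, with illustrative feature choices) is:
+
+ ```python
+ import numpy as np
+
+ def fit_Zc(B, alphas):
+     # B:      (r, r) matrix whose rows are prompt feature vectors
+     #         (e.g., prompt length, prompt perplexity, TF-IDF score, ...)
+     # alphas: (r,) observed single-token accuracies for this draft/target ratio c
+     Z_c, *_ = np.linalg.lstsq(B, alphas, rcond=None)
+     return Z_c
+
+ def predict_alpha(Z_c, u):
+     # Estimated single-token accuracy for a new prompt embedding u, clipped to [0, 1].
+     return float(np.clip(u @ Z_c, 0.0, 1.0))
+ ```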
161
+
162
+ # 6 Evaluation
163
+
164
+ In this section, we present and analyze the experimental results gathered from testing our proposed algorithms and hypotheses.
165
+
166
+ # 6.1 Experimental Setups
167
+
168
+ This part outlines the configurations and setups used to collect the performance data.
169
+
170
+ Datasets and Models. We used four datasets to evaluate model performance and benchmark various implementations. These datasets were selected to reflect common tasks found in chatbot settings and other LLM applications. We employed system prompts to guide the LLMs toward higher-quality outputs, particularly for tasks
171
+
172
+ like coding and text summarization. See Appendix D.1 for more details. The datasets include OpenAI's HumanEval (Chen et al., 2021a) (CC BY 4.0) for coding tasks, XSum for extreme text summarization (Chen et al., 2021b) (Apache 2.0), GSM8K (Cobbe et al., 2021) (MIT License) for mathematical reasoning, and Alpaca (Taori et al., 2023) (CC BY-NC 4.0) for complex advice queries. We include llama-2-chat 70B (Meta's Llama 2 Community License), Meta OPT 13B (MIT License), BigScience BLOOM 7B (RAIL License), and Dolly 12B (Databricks Open License) for target models. More details about the models we benchmarked are in Appendix D.1. Each dataset was sampled with 25 prompts in online predictive model construction, and evaluated with all remaining prompts across various settings. Note that when using speculative decoding, the draft model and the target model should have been trained on the same datasets to achieve good prediction accuracies, which limits the possible combinations in our experiments.
173
+
174
+ Platform. Table 4 in Appendix D.1 details the GPUs used, including memory bandwidth, capacity, and supported data types. For LLaMA 70B-7B and 70B-13B pairs, we use two NVIDIA A100 GPUs with 80GB memory each, distributing the 70B model across both GPUs. For other model pairs, we conduct our study using a single GPU, loading both the target and draft models on the same device.
175
+
176
+ # 6.2 Performance
177
+
178
+ We list in Table 1 the throughput results of adaptive window size selection for different model pairs on different hardware. The results of the online window optimization method are reported. We have the following observations. First, our method achieves a $2.07 \times$ speedup over autoregressive decoding and a $7.69\%$ improvement over speculative baselines. Given that even a $1\%$ speedup can save millions in large-scale LLM deployments (AI, 2023; Cloud, 2023), this improvement underscores the substantial impact of our approach. Second, our method achieves different speedups when benchmarking on different datasets. For the HumanEval dataset, speculative decoding has the potential to significantly accelerate performance due to the structured nature of programming languages, which follow stricter grammar and syntax compared to natural language. Repetitive patterns, such as for loops or if-else statements, are easier for the
179
+
180
+ draft model to predict accurately. With adaptive speculation, the algorithm can adjust the parameter $\gamma$ dynamically to suit different sub-sequences. For instance, $\gamma$ can be increased for predictable loops, whereas for more complex or less frequent constructs like API calls or high-level programming, $\gamma$ can be reduced to improve the alignment between the draft and target models, minimizing token waste. Notably, conventional speculative decoding experiences a significant slowdown on the XSum dataset, highlighting a key limitation of speculative methods. In contrast, our approach dynamically adjusts the window size—sometimes reducing it to zero—effectively preventing slowdowns. As a result, we achieve a $70\%$ throughput improvement on XSum, even though it provides no speedup over default LLMs without speculative decoding. Third, the ratio of model size matters when it comes to model pairing. Larger ratios generally lead to higher speedups while smaller target-draft parameter ratios such as BLOOM 7B-1B1 leave less room for improvement.
181
+
182
+ Table 1: Evaluation of adaptive window size selection. SPS denotes the throughput improvement our method achieves over the original speculative decoding. ARS denotes improvements over the default LLMs without speculative decoding. ("-" for not-runnable cases due to memory limit)
183
+
184
+ <table><tr><td rowspan="2">Model Pairing</td><td rowspan="2">Dataset</td><td colspan="2">A100</td><td colspan="2">V100</td><td colspan="2">4090</td></tr><tr><td>SPS</td><td>ARS</td><td>SPS</td><td>ARS</td><td>SPS</td><td>ARS</td></tr><tr><td>LLaMA 70B/7B</td><td>finance-alpaca</td><td>6.43%</td><td>2.11×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B/13B</td><td>finance-alpaca</td><td>4.89%</td><td>1.90×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BLOOM 7B/560M</td><td>finance-alpaca</td><td>4.28%</td><td>1.05×</td><td>7.69%</td><td>1.15×</td><td>3.70%</td><td>1.22×</td></tr><tr><td>BLOOM 7B/1B1</td><td>finance-alpaca</td><td>4.36%</td><td>1.04×</td><td>3.20%</td><td>1.15×</td><td>3.29%</td><td>1.17×</td></tr><tr><td>OPT 13B/125M</td><td>finance-alpaca</td><td>4.82%</td><td>2.32×</td><td>3.41%</td><td>3.4×</td><td>-</td><td>-</td></tr><tr><td>Dolly 12B/3B</td><td>finance-alpaca</td><td>9.11%</td><td>1.03×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B/7B</td><td>humaneval</td><td>10.35%</td><td>2.41×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B/13B</td><td>humaneval</td><td>8.53%</td><td>2.23×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BLOOM 7B/560M</td><td>humaneval</td><td>8.14%</td><td>1.04×</td><td>2.51%</td><td>1.09×</td><td>3.09%</td><td>1.25×</td></tr><tr><td>BLOOM 7B/1B1</td><td>humaneval</td><td>4.03%</td><td>1.1×</td><td>3.57%</td><td>1.16×</td><td>3.51%</td><td>1.3×</td></tr><tr><td>OPT 13B/125M</td><td>humaneval</td><td>11.40%</td><td>2.29×</td><td>2.15%</td><td>3.34×</td><td>-</td><td>-</td></tr><tr><td>Dolly 12B/3B</td><td>humaneval</td><td>15.20%</td><td>1.07×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B/7B</td><td>gsm8k</td><td>7.13%</td><td>2.28×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B/13B</td><td>gsm8k</td><td>9.66%</td><td>2.08×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BLOOM 7B/560M</td><td>gsm8k</td><td>15.03%</td><td>1×</td><td>2.52%</td><td>1.01×</td><td>4.84%</td><td>1.18×</td></tr><tr><td>BLOOM 7B/1B1</td><td>gsm8k</td><td>10.70%</td><td>1×</td><td>0.77%</td><td>1.02×</td><td>1.97%</td><td>1.19×</td></tr><tr><td>OPT 13B/125M</td><td>gsm8k</td><td>5.95%</td><td>2.24×</td><td>10.52%</td><td>3.36×</td><td>-</td><td>-</td></tr><tr><td>Dolly 12B/3B</td><td>gsm8k</td><td>16.92%</td><td>1.06×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B/7B</td><td>xsum</td><td>2.94%</td><td>1.73×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B/13B</td><td>xsum</td><td>0.14%</td><td>1.5×</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BLOOM 7B/560M</td><td>xsum</td><td>77.50%</td><td>1×</td><td>49.30%</td><td>1×</td><td>54.63%</td><td>1×</td></tr><tr><td>BLOOM 7B/1B1</td><td>xsum</td><td>70.91%</td><td>1×</td><td>42.94%</td><td>1×</td><td>54.17%</td><td>1×</td></tr><tr><td>OPT 13B/125M</td><td>xsum</td><td>10.64%</td><td>1.02×</td><td>7.91%</td><td>2.43×</td><td>-</td><td>-</td></tr></table>
185
+
186
+ Next, we show the results of the draft model selection. This decision is made online for each prompt. Table 2 compares the speedups over speculative decoding with and without draft model selection. For LLaMA 70B, the draft model candidates currently include LLaMA 7B and LLaMA 13B. For BLOOM 7B, the draft model candidates include BLOOM 560M, 1B1, and 1B7. The overall throughput speedups range from $3.55\%$ to $16.48\%$ using adap
187
+
188
+ tive draft model selection.
189
+
190
+ Table 2: Throughput performance improvement over speculative decoding.
191
+
192
+ <table><tr><td rowspan="2">Target Model</td><td colspan="3">finance-alpaca</td><td colspan="3">humaneval</td><td colspan="3">gsm8k</td></tr><tr><td>A100</td><td>V100</td><td>4090</td><td>A100</td><td>V100</td><td>4090</td><td>A100</td><td>V100</td><td>4090</td></tr><tr><td>LLaMA 70B (w/o draft selection)</td><td>6.43%</td><td>-</td><td>-</td><td>10.35%</td><td>-</td><td>-</td><td>9.66%</td><td>-</td><td>-</td></tr><tr><td>LLaMA 70B (w/draft selection)</td><td>6.46%</td><td>-</td><td>-</td><td>11.11%</td><td>-</td><td>-</td><td>9.66%</td><td>-</td><td>-</td></tr><tr><td>BLOOM 7B (w/o draft selection)</td><td>4.36%</td><td>7.69%</td><td>3.70%</td><td>8.14%</td><td>3.57%</td><td>3.51%</td><td>9.76%</td><td>2.52%</td><td>4.84%</td></tr><tr><td>BLOOM 7B (w draft selection)</td><td>4.94%</td><td>16.48%</td><td>8.15%</td><td>8.57%</td><td>4.96%</td><td>4.17%</td><td>9.76%</td><td>3.55%</td><td>6.83%</td></tr></table>
193
+
194
+ We compare our online adaptive window size selection with SpecDec++ (Huang et al., 2024) in Table 3. SpecDec++ uses a ResNet to determine whether to stop speculation during speculative sampling at the current word predicted from the draft model. It employs this method based on its prediction of whether the next draft token will be accepted. Training this ResNet model requires conducting offline profiling runs and collecting data on the hardware (for example, 500 hours on A100-80G GPUs for training dataset generation, 400 hours for training, and 500 hours for evaluation set). To ensure a fair comparison, we employ the same setup from its original paper, using LLaMA2-chat models (Touvron et al., 2023b). Specifically, we select the 7B version as the draft model and the 70B version as the target model for the A100 platform and BigScience BLOOM 560m version as the draft model and the 7B version as the target model for GTX 4090. To optimize memory usage, the models are implemented in the bfloat16 format. The tok/s speedups comparison is as follows on both the A100 and 4090 devices. We find that although our method uses no ahead-of-time training while SpecDec++ uses hundreds of GPU-hours to do that, our method outperforms SpecDec++ consistently, with an average of $5.7\%$ improvement in latency. Part of the time savings come from selecting the $\gamma$ value before each speculation instead of running a neural network each time the draft model produces a new token. Our approach further shows advancement by adaptively choosing $\gamma$ on the fly without arduous data collecting and training.
195
+
196
+ Table 3: Comparison of Tok/s speedups (v.s. autoregressive) and productivity of SpecDec++ and our method (without draft model selection).
197
+
198
+ <table><tr><td rowspan="2">Dataset</td><td colspan="2">A100 (LLaMA 70B/7B)</td><td colspan="2">4090 (BLOOM 7B/560m)</td></tr><tr><td>SpecDec++</td><td>Ours</td><td>SpecDec++</td><td>Ours</td></tr><tr><td>Alpaca</td><td>2.04×</td><td>2.11×</td><td>1.21×</td><td>1.26×</td></tr><tr><td>HumanEval</td><td>2.23×</td><td>2.41×</td><td>1.22×</td><td>1.23×</td></tr><tr><td>GSM8K</td><td>2.26×</td><td>2.28×</td><td>1.17×</td><td>1.18×</td></tr><tr><td>Profile &amp; Prepare</td><td>1000h</td><td rowspan="2">0</td><td>100h</td><td rowspan="2">0</td></tr><tr><td>Offline Training</td><td>400h</td><td>400h</td></tr></table>
199
+
200
+ ![](images/a83bff2393353f1379f79135c25b55cd5673d4a7b1085ce951c2970c0135ca7a.jpg)
201
+ Figure 3: Detailed experimental results of different adaptive methods.
202
+
203
+ # 6.3 Detailed Analysis
204
+
205
+ We compare the throughput and acceptance rate for different adaptive speculation methods in Figure 3. $\gamma$ denotes the speculation window size for the original speculative decoding method and upper-bound speculation (simply by skipping the validation process); we set a maximum $\gamma_{\mathrm{max}}$ value for adaptive speculation methods, ensuring that $\gamma$ will not exceed this value. All experiments are conducted on the A100 machine with OPT 13B-125M model pair. From the figure, we find that (i) the analytical model-guided online window size optimization method gives the best overall performance. (ii) Even though RL-based speculation gives better acceptance rates than the other methods, it shows lower throughput. This is because a higher acceptance rate is not directly linked to a higher throughput as in Equation 3. In our case, RL-based speculation remains at a low $\gamma$ value to keep the acceptance rate high while also losing the potential for more speedups. (iii) cache-based and state-based speculation perform better when prompts are longer (e.g., the humaneval dataset). This can be attributed to a more stable $\gamma$ prediction as more information is involved in the long prompt.
206
+
207
+ # 6.4 Results for Scalability
208
+
209
+ Comprehensive Chat Dataset. We include evaluations for a comprehensive chat dataset ShareGPT (Community, 2023) in Appendix D.3. Results show that our method achieves an average of $1.71 \times$ speedups compared to original autoregressive decoding, and an additional $4.9\%$ improve
210
+
211
+ ment over speculative decoding baselines.
212
+
213
+ Adaptive Speculation for Tree-based Decoding Method. Current speculative decoding uses tree-based methods (Cai et al., 2024; Li et al.). On-the-fly adaptation of speculative decoding is complementary to the tree-based decoding. By adaptively changing the draft tree depth, our drop-in method optimizes the draft token sequence length in real time, enhancing decoding performance. We apply our method to the state-of-the-art EAGLE-2 (Li et al., 2024) and report the results in Appendix D.5. On the MT-Bench (Zheng et al., 2023), we achieve up to $3.56 \times$ speedups compared to original autoregressive decoding, and an additional $4.2\%$ improvement over SOTA.
214
+
215
+ # 7 Conclusion
216
+
217
+ In this paper, we propose on-the-fly adaptation for speculative decoding to accelerate LLM inferences. As a pure software approach, it introduces a two-level adaptation for draft model adaptation and online window size adaptation with no ahead-of-time profiling or training, providing a drop-in optimization for existing LLMs. We experimentally demonstrate the effectiveness of this method and show $3.55\%$ to $16.48\%$ speedups compared to the speculative decoding, and $1.2 \times$ to $3.4 \times$ over the default LLMs without speculative decoding. Among the several online adaptive methods, we found that the token accuracy-based online window size optimization method works the best, consistently outperforming other methods in terms of the overall LLM throughput.
218
+
219
+ # 8 Limitation
220
+
221
+ This section discusses the limitations of the current work. The drop-in nature of our solution assumes compatibility with existing inference pipelines, but integration challenges may arise in specialized LLM deployments, such as those using custom hardware accelerators or distributed inference systems. Future work should explore more adaptive and model-agnostic strategies to further enhance the robustness and applicability of our approach.
222
+
223
+ # 9 Acknowledgement
224
+
225
+ This material is based upon work supported by the National Science Foundation (NSF) under Grant No. CNS-2312207 and the National Institute of Health (NIH) under Grant No. 1R01HD108473-01. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF or NIH.
226
+
227
+ # References
228
+
229
+ Google AI. 2023. Llm inference api - google ai mediapipe solutions. Accessed: 2024-09-15.
230
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
231
+ Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple llm inference acceleration framework with multiple decoding heads. arXiv preprint arXiv:2401.10774.
232
+ Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318.
233
+ Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021a. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
234
+
235
+ Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021b. Dialogsum: A real-life scenario dialogue summarization dataset. arXiv preprint arXiv:2105.06762.
236
+ Google Cloud. 2023. Accelerating ai inference with google cloud tpus and gpus. Accessed: 2024-09-15.
237
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
238
+ ShareGPT Community. 2023. Sharegpt: A platform for sharing gpt conversations. https://sharegpt.com/. Accessed: 2024-11-16.
239
+ Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free dolly: Introducing the world's first truly open instruction-tuned llm. Company Blog of Databricks.
240
+ Yichao Fu, Peter Bailis, Ion Stoica, and Hao Zhang. 2024. Break the sequential dependency of llm inference using lookahead decoding. arXiv preprint arXiv:2402.02057.
241
+ John L. Hennessy and David A. Patterson. 2017. Computer Architecture, Sixth Edition: A Quantitative Approach, 6th edition. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
242
+ Geoffrey Hinton. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
243
+ Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Hasan Genc, Kurt Keutzer, Amir Gholami, and Sophia Shao. 2023. Speed: Speculative pipelined execution for efficient decoding. arXiv preprint arXiv:2310.12072.
244
+ Kaixuan Huang, Xudong Guo, and Mengdi Wang. 2024. Specdec++: Boosting speculative decoding via adaptive candidate lengths. arXiv preprint arXiv:2405.19715.
245
+ Daniel A Jiménez and Calvin Lin. 2001. Dynamic branch prediction with perceptrons. In Proceedings HPCA Seventh International Symposium on High-Performance Computer Architecture, pages 197-206. IEEE.
246
+ Chih-Chieh Lee, I-CK Chen, and Trevor N Mudge. 1997. The bi-mode branch predictor. In Proceedings of 30th Annual International Symposium on Microarchitecture, pages 4-13. IEEE.
247
+ Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pages 19274-19286. PMLR.
248
+ Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024. Eagle: Speculative sampling requires rethinking feature uncertainty. In *Forty-first International Conference on Machine Learning*.
249
+
250
+ Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024. Eagle: Speculative sampling requires rethinking feature uncertainty. arXiv preprint arXiv:2401.15077.
251
+
252
+ Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Ion Stoica, Zhijie Deng, Alvin Cheung, and Hao Zhang. 2023. Online speculative decoding. arXiv preprint arXiv:2310.07177.
253
+
254
+ Xupeng Miao, G Oliaro, Z Zhang, X Cheng, Z Wang, RYY Wong, A Zhu, L Yang, X Shi, C Shi, et al. 2023. Specinfer: Accelerating generative large language model serving with speculative inference and token tree verification. arXiv preprint arXiv:2305.09781.
255
+
256
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730-27744. Curran Associates, Inc.
257
+
258
+ Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5149-5152. IEEE.
259
+
260
+ Rico Sennrich. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
261
+
262
+ James E Smith. 1998. A study of branch prediction strategies. In 25 years of the international symposia on Computer architecture (selected papers), pages 202-215.
263
+
264
+ Benjamin Spector and Chris Re. 2023. Accelerating llm inference with staged speculative decoding. arXiv preprint arXiv:2308.04623.
265
+
266
+ Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford alpaca: An instruction-following llama model.
267
+
268
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
269
+
270
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
271
+
272
+ Minghao Yan, Saurabh Agarwal, and Shivaram Venkataraman. 2024. Decoding speculative decoding. arXiv preprint arXiv:2402.01528.
273
+
274
+ Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595-46623.
275
+
276
+ # A Proof
277
+
278
+ # A.1 Proof of Theorem 1.
279
+
280
+ Proof. Since speculative decoding terminates upon the first failure in a verification window, the number of accepted tokens $V(\gamma, X_{<t})$ follows a truncated geometric distribution:
281
+
282
+ $$
283
+ P (V = k) = (1 - p) ^ {k} p, \quad k \in \{0, 1, \dots , \gamma - 1 \}.
284
+ $$
285
+
286
+ Thus, for a fixed window size $\gamma$ , the failure probability at each step is:
287
+
288
+ $$
289
+ q = 1 - (1 - p) ^ {\gamma}.
290
+ $$
291
+
292
+ The number of failures $F$ in $N$ verification steps follows a binomial distribution:
293
+
294
+ $$
295
+ F \sim \operatorname{Binomial}(N, q).
296
+ $$
297
+
298
+ For small $p$ , we approximate:
299
+
300
+ $$
301
+ q \approx \gamma p.
302
+ $$
303
+
304
+ By the Poisson limit theorem, for large $N$ , the failure count can be approximated by:
305
+
306
+ $$
307
+ F \sim \operatorname{Poisson}(N \gamma p).
308
+ $$
309
+
310
+ Now, both adaptive and fixed methods estimate $p$ using the maximum likelihood estimator:
311
+
312
+ $$
313
+ \widehat {p} = \frac {F}{S + F}.
314
+ $$
315
+
316
+ Applying the delta method, we approximate the variance of $\widehat{p}$ :
317
+
318
+ $$
319
+ \operatorname{Var}(\widehat{p}) \approx \frac{p(1 - p)}{(S + F)^{2}}.
320
+ $$
321
+
322
+ In the fixed case, the total number of observed tokens is:
323
+
324
+ $$
325
+ S_{\text{fixed}} + F_{\text{fixed}} = N \gamma .
326
+ $$
327
+
328
+ For the adaptive method, where $\gamma (j)$ is adjusted dynamically, we have:
329
+
330
+ $$
331
+ S_{\text{adaptive}} + F_{\text{adaptive}} \geq N \gamma .
332
+ $$
333
+
334
+ Thus,
335
+
336
+ $$
337
+ \operatorname{Var}\left(\widehat{p}_{\text{adaptive}}\right) \leq \operatorname{Var}\left(\widehat{p}_{\text{fixed}}\right).
338
+ $$
339
+
340
+ Using Hoeffding's inequality,
341
+
342
+ $$
343
+ P \left(| \widehat {p} - p | \geq \epsilon\right) \leq 2 \exp \left(- 2 \epsilon^ {2} (S + F)\right),
344
+ $$
345
+
346
+ we conclude that the adaptive method has a tighter error bound:
347
+
348
+ $$
349
+ P\left(\left| \widehat{p}_{\text{adaptive}} - p \right| \geq \epsilon\right) \leq P\left(\left| \widehat{p}_{\text{fixed}} - p \right| \geq \epsilon\right).
350
+ $$
351
+
352
+ Thus, the expected absolute error is also smaller:
353
+
354
+ $$
355
+ \mathbb{E}\left[ \left| \widehat{p}_{\text{adaptive}} - p \right| \right] \leq \mathbb{E}\left[ \left| \widehat{p}_{\text{fixed}} - p \right| \right].
356
+ $$
357
+
358
+ This completes the proof.
359
+
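+ As a side note (an illustrative snippet of ours, not part of the paper), the failure model used above is easy to sanity-check numerically: the empirical per-step failure probability matches $q = 1 - (1-p)^{\gamma}$ and, for small $p$, is close to $\gamma p$, so the failure count over $N$ steps is well approximated by $\operatorname{Poisson}(N\gamma p)$.
+
+ ```python
+ # Illustrative Monte Carlo check (not from the paper) of the failure model in
+ # the proof of Theorem 1: per-token rejection probability p, window size gamma,
+ # N verification steps.
+ import random
+
+ def simulate_failures(p, gamma, N, seed=0):
+     rng = random.Random(seed)
+     failures = 0
+     for _ in range(N):
+         # A verification step fails if any of the gamma draft tokens is rejected.
+         if any(rng.random() < p for _ in range(gamma)):
+             failures += 1
+     return failures
+
+ p, gamma, N = 0.02, 5, 10_000
+ emp = simulate_failures(p, gamma, N) / N
+ exact = 1 - (1 - p) ** gamma          # q = 1 - (1 - p)^gamma
+ approx = gamma * p                    # small-p approximation q ~ gamma * p
+ print(f"empirical q={emp:.4f}  exact q={exact:.4f}  approx q={approx:.4f}")
+ # The expected failure count N*q is then well approximated by Poisson(N*gamma*p).
+ ```
+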
360
+ ![](images/5cfda649786bdbbe64485ad8b48d33e47524d744d09eccb4e5edfd129a846d4a.jpg)
361
+
362
+ # A.2 Proof of Theorem 2.
363
+
364
+ Proof. Let $\{\gamma_q^i\}_{i=1}^n$ denote the history of the window sizes during the adaptive speculation and $d_q = \mathbb{E}_{i=1,\dots,n}(\gamma_q^i)$ be the average window size during speculation. In the following formulations, we omit $p$ and $q$ as the formulations are about a given $p$ and $q$ pair. The throughput $R$ is computed by dividing the length of the answer by the latency $t$ :
365
+
366
+ $$
367
+ R = \frac {L}{t}. \tag {6}
368
+ $$
369
+
370
+ The total latency of generating outputs for one prompt is computed as
371
+
372
+ $$
373
+ \begin{array}{l} t = \sum_{i = 1}^{n} \left( a \gamma^{i} + b(\gamma) \right) = b(\gamma)\, n + a \sum_{i = 1}^{n} \gamma^{i} \\ = n \left( b(\gamma) + a \cdot \mathbb{E}\left(\gamma^{i}\right) \right). \tag{7} \\ \end{array}
374
+ $$
375
+
376
+ Inspecting the relations among $d, n, \rho$ and $L$ gives us
377
+
378
+ $$
379
+ L = d \cdot n \cdot \rho . \tag {8}
380
+ $$
381
+
382
+ Solving Equations 6, 7, and 8 together gives the expression for the throughput.
383
+
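+ Written out explicitly (this is only the final substitution; it follows directly from Equations 6-8, with $d$ denoting the average window size so that $d = \mathbb{E}(\gamma^{i})$):
+
+ $$
+ R = \frac{d \, n \, \rho}{n \left( b(\gamma) + a \cdot \mathbb{E}\left(\gamma^{i}\right) \right)} = \frac{d \, \rho}{b(\gamma) + a \, d}.
+ $$
+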
384
+ # B Method Details
385
+
386
+ # B.1 Formulation of Objective 1
387
+
388
+ This section discusses details on formulating objective 1.
389
+
390
+ Expected Accepted Token Length. Given the single token accuracy $\beta = Acc(x_{t}|X_{< t}) \in [0,1]$, the expected number of accepted tokens is computed as:
391
+
392
+
393
+
394
+ $$
395
+ \begin{array}{l} \mathbb{E}(\#\text{ of accepted tokens} \mid X_{< t}) \\ = 1 + \sum_{i = 1}^{\gamma - 1} i \beta^{i} (1 - \beta) + \gamma \beta^{\gamma} \\ = 1 + \sum_{i = 1}^{\gamma - 1} i \beta^{i} - \sum_{i = 2}^{\gamma} (i - 1) \beta^{i} + \gamma \beta^{\gamma} \tag{9} \\ = \sum_{i = 0}^{\gamma} \beta^{i} \\ = \frac{1 - \beta^{\gamma + 1}}{1 - \beta}. \\ \end{array}
396
+ $$
397
+
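+ As a quick sanity check of the closed form above (our own illustrative snippet, not part of the paper), a short Monte Carlo simulation of the acceptance process reproduces $\frac{1 - \beta^{\gamma+1}}{1 - \beta}$:
+
+ ```python
+ # Verify Equation 9 numerically: one bonus token from the target model plus the
+ # run of accepted draft tokens in a window of size gamma, each accepted i.i.d.
+ # with probability beta.
+ import random
+
+ def expected_accepted(beta, gamma):
+     return (1 - beta ** (gamma + 1)) / (1 - beta)
+
+ def simulate(beta, gamma, trials=200_000, seed=0):
+     rng = random.Random(seed)
+     total = 0
+     for _ in range(trials):
+         accepted = 0
+         for _ in range(gamma):
+             if rng.random() < beta:
+                 accepted += 1
+             else:
+                 break
+         total += accepted + 1  # +1 for the token the target model always emits
+     return total / trials
+
+ beta, gamma = 0.7, 6
+ print(simulate(beta, gamma), expected_accepted(beta, gamma))  # both close to 3.06
+ ```
+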
398
+ Formulation of the objective. The expected number of tokens verified as correct in a $\gamma$-long speculation window is $\frac{1 - Acc(x_t|X_{<t})^{\gamma + 1}}{1 - Acc(x_t|X_{<t})}$. The total latency of one speculation step plus its verification step is $\gamma a_q + b_p$. Therefore, the expected number of tokens verified as correct per unit time, given a window size $\gamma$, is
399
+
400
+ $$
401
+ \frac{1 - Acc(x_{t} \mid X_{< t})^{\gamma + 1}}{\left(1 - Acc(x_{t} \mid X_{< t})\right) (\gamma a_{q} + b_{p})}.
402
+ $$
403
+
404
+ # B.2 Estimation of $Acc(x_{t}|X_{< t})$
405
+
406
+ Let $\beta = Acc(x_{t}|X_{< t})$ . Let $Y$ be a random variable of the number of accepted tokens truncated at $\gamma +1$ . The probability function of $Y$ is
407
+
408
+ $$
409
+ f(y) = \left\{ \begin{array}{ll} \frac{(1 - \beta) \beta^{y - 1}}{1 - \beta^{\gamma + 1}}, & y = 1, 2, 3, \dots , \gamma + 1 \\ 0, & \text{otherwise.} \end{array} \right. \tag{10}
410
+ $$
411
+
412
+ Maximum Likelihood Estimation. For a random sample of size $n$ , the likelihood function is
413
+
414
+ $$
415
+ L = (1 - \beta^{\gamma + 1})^{-n} (1 - \beta)^{n} \beta^{\sum_{i = 1}^{n} y_{i} - n}.
416
+ $$
417
+
418
+ The following equation 11, a $(\gamma +2)$ th-degree polynomial in $\hat{\beta}$ , provides the maximum likelihood estimator for $\beta$ .
419
+
420
+ $$
421
+ \begin{array}{l} \left(\sum_ {i = 1} ^ {n} y _ {i} - n (\gamma + 2) + n\right) \hat {\beta} ^ {\gamma + 2} + \left(n (\gamma + 2) - \sum_ {i = 1} ^ {n} y _ {i}\right) \hat {\beta} ^ {\gamma + 1} \\ - \left(\sum_ {i = 1} ^ {n} y _ {i}\right) \hat {\beta} + \sum_ {i = 1} ^ {n} y _ {i} - n = 0 \tag {11} \\ \end{array}
422
+ $$
423
+
424
+ Given values of $\gamma$, $n$, and $\sum_{i=1}^{n} y_i = n\overline{y}$, one can compute the value of $\hat{\beta}$ using an iterative technique such as the Newton-Raphson method to solve
425
+
426
+ equation 11. It can be shown that there is only one root in the range $0 < \hat{\beta} < 1$ .
427
+
428
+ To eliminate the need for an iterative solution to equation 11, we maintain a table to provide approximate solutions. From equation 11,
429
+
430
+ $$
431
+ \bar{y} = \frac{(\gamma + 1) \hat{\beta}^{\gamma + 2} - (\gamma + 2) \hat{\beta}^{\gamma + 1} + 1}{\hat{\beta}^{\gamma + 2} - \hat{\beta}^{\gamma + 1} - \hat{\beta} + 1}. \tag{12}
432
+ $$
433
+
434
+ Further observation shows that the rate of change of $\overline{y}$ with respect to $\hat{\beta}$ is sufficiently close to constant, making linear interpolation feasible and enabling our approximation in equation 2.
435
+
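+ A minimal sketch of this table-based approach (our illustration; the exact table layout used in the implementation is not specified here) precomputes $\bar{y}$ as a function of $\hat{\beta}$ via equation 12 and inverts it by linear interpolation:
+
+ ```python
+ # Table-based inversion of equation 12: y_bar is monotonically increasing in
+ # beta_hat on (0, 1), so a lookup table plus linear interpolation recovers
+ # beta_hat without iterating on equation 11.
+ def y_bar(beta, gamma):
+     num = (gamma + 1) * beta ** (gamma + 2) - (gamma + 2) * beta ** (gamma + 1) + 1
+     den = beta ** (gamma + 2) - beta ** (gamma + 1) - beta + 1
+     return num / den
+
+ def estimate_beta(y_obs, gamma, grid_size=1000):
+     table = [(y_bar(i / grid_size, gamma), i / grid_size)
+              for i in range(1, grid_size)]
+     for (y_lo, b_lo), (y_hi, b_hi) in zip(table, table[1:]):
+         if y_lo <= y_obs <= y_hi:
+             if y_hi == y_lo:
+                 return b_lo
+             w = (y_obs - y_lo) / (y_hi - y_lo)
+             return b_lo + w * (b_hi - b_lo)
+     return 0.0 if y_obs <= table[0][0] else 1.0
+
+ print(estimate_beta(y_obs=3.2, gamma=6))  # roughly 0.8, i.e. the implied token accuracy
+ ```
+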
436
+ # B.3 Optimal Gamma
437
+
438
+ Given the single token accuracy and inference latency ratio of the draft model to the target model $c$ , the optimal $\gamma$ value to optimize objective 1 can be determined as in Figure 4.
439
+
440
+ ![](images/07b5e32449cf721b63849bb2b8001ce6f3739360e393b4a72500d42ab5ab5971.jpg)
441
+ Figure 4: The optimal $\gamma$ for different $\alpha$ and $c$ values.
442
+
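+ As a rough numerical sketch (ours, not the paper's code) of how such a curve can be produced, one can evaluate the objective from Appendix B.1 over candidate window sizes and take the argmax; here $\beta$ plays the role of the single-token accuracy ($\alpha$ in Figure 4) and $c$ is the draft-to-target latency ratio, so that $a_q = c \cdot b_p$ with $b_p$ normalized to 1:
+
+ ```python
+ # Grid search for the gamma that maximizes the expected number of tokens
+ # verified as correct per unit time (objective 1).
+ def tokens_per_unit_time(beta, gamma, c):
+     return (1 - beta ** (gamma + 1)) / ((1 - beta) * (gamma * c + 1.0))
+
+ def optimal_gamma(beta, c, gamma_max=32):
+     return max(range(1, gamma_max + 1),
+                key=lambda g: tokens_per_unit_time(beta, g, c))
+
+ for beta in (0.5, 0.7, 0.9):
+     for c in (0.05, 0.1, 0.2):
+         print(f"beta={beta}, c={c} -> optimal gamma = {optimal_gamma(beta, c)}")
+ ```
+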
443
+ # B.4 Pseudo-code for Reinforcement Learning-based Speculation
444
+
445
+ Algorithm 2 details the reinforcement learning-based speculation.
446
+
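+ The appendix does not pin down the agent's internals; as one plausible, lightweight instantiation (an assumption on our part rather than a prescribed design), GetAction in Algorithm 2 could be an $\epsilon$-greedy bandit over candidate window sizes whose reward is the fraction of accepted speculated tokens, which keeps both inference and updates cheap enough to run on the CPU:
+
+ ```python
+ # Hypothetical epsilon-greedy bandit for choosing the speculation window size.
+ # Only an illustration of what Agent / GetAction / Reward in Algorithm 2 could
+ # look like; it is not the exact agent used in the experiments.
+ import random
+
+ class WindowSizeAgent:
+     def __init__(self, gamma_max=8, epsilon=0.1):
+         self.actions = list(range(1, gamma_max + 1))
+         self.epsilon = epsilon
+         self.value = {a: 0.0 for a in self.actions}  # running reward estimates
+         self.count = {a: 0 for a in self.actions}
+
+     def get_action(self):
+         if random.random() < self.epsilon:
+             return random.choice(self.actions)                 # explore
+         return max(self.actions, key=lambda a: self.value[a])  # exploit
+
+     def update(self, action, reward):
+         # reward: fraction of speculated tokens accepted in this step
+         self.count[action] += 1
+         self.value[action] += (reward - self.value[action]) / self.count[action]
+
+ # Usage inside the decoding loop (sketch):
+ #   agent = WindowSizeAgent(); gamma = agent.get_action()
+ #   ... speculate/verify with window gamma, observe n accepted tokens ...
+ #   agent.update(gamma, n / gamma)
+ ```
+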
447
+ # C Additional Method Information
448
+
449
+ # C.1 Examples of Easier and Harder Sequences for Draft Models
450
+
451
+ # Sub-sequences Easier for Smaller Draft Models
452
+
453
+ These sequences typically involve high-frequency n-grams or strong grammatical constraints.
454
+
455
+ # Common Phrases & Collocations:
456
+
457
+ - Context: "Thank you very..." $\rightarrow$ Draft model: "...much."
458
+ - Context: "Once upon a..." $\rightarrow$ Draft model: "...time."
459
+ - Context: "The cat sat on the..." → Draft model: "...mat."
460
+
461
+ Algorithm 2 Reinforcement Learning-Based Speculative
462
+
463
+ 1: function reinforcementLearningSpeculation $(M_p, M_q, \text{prefix}, \text{Agent})$
464
+ 2: $\triangleright$ Sample $y$ guesses $x_{1},\dots ,x_{y}$ from $M_q$ autoregressively.
465
+ 3: for $i = 1$ to $y$ do
466
+ 4: $q_{i}(x)\sim M_{q}(prefix + [x_{1},\dots ,x_{i - 1}])$
467
+ 5: $x_{i}\sim q_{i}(x)$
468
+ 6: $\triangleright$ Run $M_p$ in parallel.
469
+ 7: $(p_{1}(x),\dots ,p_{y + 1}(x))\gets$
470
+ 8: $M_{p}(prefix),\dots ,M_{p}(prefix + [x_{1},\dots ,x_{y}])$
471
+ 9: Determine the number of accepted guesses $n$ .
472
+ 10: $r_1 \sim U(0,1), \dots, r_y \sim U(0,1)$
473
+ 11: $n\gets \min (\{i - 1|1\leq i\leq y,r_i > \frac{p_i(x)}{q_i(x)}\} \cup \{y\})$
474
+ 12: Adjust the distribution from $M_{p}$ if needed.
475
+ 13: $p^{\prime}(x)\gets p_{n + 1}(x)$
476
+ 14: if $n < y$ then
477
+ 15: $p^{\prime}(x)\gets \mathcal{N}(\max (0,p_{n + 1}(x) - q_{n + 1}(x)))$
478
+ 16: action $\leftarrow$ GetAction(Agent, $y$ )
479
+ 17: $y =$ action
480
+ 18: Reward = the percentage of the accepted speculated tokens
481
+ 19: $\triangleright$ Return one token from $M_p$ and $n$ tokens from $M_q$ .
482
+ 20: $t\sim p^{\prime}(x)$
483
+ 21: return $prefix + [x_{1}, \dots, x_{n}, t]$
484
+
485
+ # Syntactically Predictable Structures:
486
+
487
+ - Context: "She is going..." $\rightarrow$ Draft model: "...to" (infinitive marker)
488
+ - Context: "The quick brown fox jumps over the lazy..." $\rightarrow$ Draft model: "...dog." (common idiom)
489
+
490
+ # Sub-sequences Harder for Smaller Draft Models (Better for LLM)
491
+
492
+ These often involve specific entities, specialized vocabulary, complex relationships, or novel information.
493
+
494
+ # Unique/Domain-Specific Terminology:
495
+
496
+ - Context: "The patient was diagnosed with a rare form of..." $\rightarrow$ LLM: "...amyloidosis." (Specific medical term)
497
+ - Context: "The research paper discussed implications of quantum..." $\rightarrow$ LLM: "...entanglement for secure communication protocols." (Complex and specific)
498
+
499
+ # Complex Factual Information/Proper Nouns:
500
+
501
+ - Context: "The capital of Burkina Faso is...". $\rightarrow$ LLM: "...Ouagadougou."
502
+
503
+ # Nuanced/Figurative Language:
504
+
505
+ - Context: "Her argument, while passionate, was ultimately built on a foundation of..." $\rightarrow$ LLM: "...shifting sand." (Figurative language)
506
+
507
+ # Unexpected but Coherent Developments:
508
+
509
+ - Context: "Everyone expected the hero to save the day, but instead, he...” $\rightarrow$ LLM: "...revealed he had been the antagonist all along." (Surprising turn)
510
+
511
+ These examples highlight how a draft model handles predictable sequences for validation, reserving the LLM for generating more challenging, information-rich, or novel text.
512
+
513
+ # C.2 Workflow Example of Figure 1
514
+
515
+ The example provided at the bottom of Figure 1 illustrates this dynamic adaptation of $\gamma$. The text below the main pipeline shows a sequence being generated, with the speculation window $\gamma$ changing at each step. The generation process starts with an initial $\gamma$ (e.g., $\gamma = 6$ for the tokens after "[START]").
516
+
517
+ 1. The draft model $M_q$ generates $\gamma$ tokens.
518
+ 2. These tokens are validated by $M_p$ . Tokens matching $M_p$ 's output are 'Accepted' (green), like 'japan's benchmark'.
519
+
520
+ - If a draft token mismatches $M_p$ 's output, it's 'Rejected' (red). For example, 'bond' is shown in red, signifying it was the first point of disagreement in that particular draft batch after three preceding tokens ('japan's benchmark') were accepted. The correct token from $M_p$ is used.
521
+ - A token like '69,' shown in blue is 'Resampled,' indicating it was provided or corrected by the target model, possibly based on a draft proposal that was neither a perfect match nor a complete mismatch requiring immediate termination of accepted tokens.
522
+ - Tokens like 'in late morning trading. [END]' are shown in grey, indicating they are 'Pending validation' in a subsequent step.
523
+
524
+ 3. Based on the outcome of this validation (e.g., 3 tokens accepted before the rejection at 'bond' when $\gamma$ was 6), the 'On-the-fly $\gamma$ Adaptation' module determines the next value for $\gamma$
525
+
526
+ (e.g., $\gamma = 5$ for the tokens following 'bond'). This cycle of speculation, validation, and $\gamma$-adaptation repeats. As depicted by the values above the generated text, $\gamma$ changes throughout the generation of the sequence ($6 \rightarrow 5 \rightarrow 4 \rightarrow 3 \rightarrow 4 \rightarrow 9 \rightarrow 3 \rightarrow 5$). This adaptive behavior, for instance increasing $\gamma$ to 9 (e.g., before 'percent, to 10') when acceptance rates are high or decreasing it (e.g., to 3 before '98 5.9') when mismatches are frequent, allows the system to optimize the speculation length according to the local characteristics and predictability of the sequence being generated.
527
+
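+ Abstracting away the model calls, the cycle above reduces to a short control loop. The toy snippet below is self-contained but illustrative only: the random acceptance test stands in for the draft/target models, and the additive-increase/cut-back rule is a placeholder rather than our actual adaptation policy.
+
+ ```python
+ # Toy speculate -> validate -> adapt loop for the workflow described above.
+ import random
+
+ def adapt_gamma(gamma, accepted, gamma_max):
+     if accepted == gamma:                # whole window accepted: speculate further
+         return min(gamma + 1, gamma_max)
+     return max(1, accepted + 1)          # otherwise shrink toward the accepted run
+
+ def run(steps=8, beta=0.75, gamma=6, gamma_max=9, seed=0):
+     rng = random.Random(seed)
+     for step in range(steps):
+         accepted = 0
+         for _ in range(gamma):           # validate the gamma draft tokens one by one
+             if rng.random() < beta:
+                 accepted += 1
+             else:
+                 break
+         print(f"step {step}: gamma={gamma}, accepted={accepted}")
+         gamma = adapt_gamma(gamma, accepted, gamma_max)
+
+ run()
+ ```
+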
528
+ # D Additional Experiments
529
+
530
+ This section includes additional experimental results.
531
+
532
+ # D.1 Additional Experimental setups
533
+
534
+ Software. We primarily use the HuggingFace Transformers library with PyTorch implementations. The flexibility of Python and the availability of pre-trained weights on HuggingFace allow us to experiment with various methods and conduct detailed analyses. The GPU implementations utilize NVIDIA's cuDNN library, which is optimized for large language models (LLMs) and transformers. To ensure the best performance and compatibility with the latest models, we use the most recent versions of Transformers (v4.38.2) and PyTorch (v2.2.1).
535
+
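+ For reference only (this snippet reproduces neither our adaptive window logic nor the exact benchmark scripts), a target/draft pair such as the OPT pair in Table 6 can be exercised with the Transformers assisted-generation interface, which takes the draft model via the assistant_model argument:
+
+ ```python
+ # Baseline speculative decoding with HuggingFace Transformers, shown only to
+ # illustrate the software stack described above. device_map="auto" assumes the
+ # accelerate package is installed.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ target_name, draft_name = "facebook/opt-13b", "facebook/opt-125m"
+ tokenizer = AutoTokenizer.from_pretrained(target_name)
+ target = AutoModelForCausalLM.from_pretrained(target_name, torch_dtype=torch.bfloat16, device_map="auto")
+ draft = AutoModelForCausalLM.from_pretrained(draft_name, torch_dtype=torch.bfloat16, device_map="auto")
+
+ prompt = "You are a finance expert. Explain compound interest briefly."
+ inputs = tokenizer(prompt, return_tensors="pt").to(target.device)
+ outputs = target.generate(**inputs, assistant_model=draft, do_sample=False, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+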
536
+ Hardware. LLMs demand significant GPU computing power and memory, particularly during inference, where memory bandwidth is critical in achieving high throughput on GPUs. Table 4 lists the GPUs we used, their memory bandwidth, capacity, and the datatypes employed.
537
+
538
+ Table 4: GPU Hardware
539
+
540
+ <table><tr><td>GPU</td><td>HBM (GB)</td><td>Mem Bandwidth (GB/s)</td><td>Datatype</td></tr><tr><td>NVIDIA V100</td><td>32</td><td>900</td><td>FP16</td></tr><tr><td>NVIDIA A100</td><td>80</td><td>1555</td><td>BF16</td></tr><tr><td>NVIDIA RTX4090</td><td>24</td><td>1008</td><td>BF16</td></tr></table>
541
+
542
+ We use two NVIDIA A100 GPUs with 80G memory for the LLaMA 70B-7B pair and 70B-13B pair. We distribute the 70B model across two GPUs, which leads to communication overhead during inference with LLaMA 70B. However, for speculative decoding, the 7B (13B) draft model is only loaded on a single GPU, reducing this overhead. For other model pairs, we limit our study to one GPU, loading both the target and draft models on a single device. This approach serves two
543
+
544
+ purposes. First, it allows us to explore the effects of resource constraints on a single GPU, which is relevant for future work on speculative decoding for personal devices. Second, it maximizes efficiency, as splitting a small LLM onto one GPU and a large LLM onto another would underutilize the resources; it is more effective to run both models on a single GPU.
545
+
546
+ Prompt Dataset. Table 5 consolidates information on the datasets, tasks, and additional details we used to benchmark and compare performance.
547
+
548
+ Table 5: Prompt Dataset
549
+
550
+ <table><tr><td>Dataset</td><td>Task</td><td>System Prompt</td></tr><tr><td>OpenAI HumanEval</td><td>Code completion</td><td>You are an expert programmer that helps to complete Python code. Given the code from the user, please complete the rest of the code to the best of your ability.</td></tr><tr><td>XSum</td><td>Summarization</td><td>You write two sentence summaries of new articles. Do not write any more. Keep it brief and to the point.</td></tr><tr><td>GSM8K</td><td>Math Word Problem</td><td>You are given a math question, and your task is to answer it. Then provide a step-by-step walkthrough on how you got the answer to the question.</td></tr><tr><td>Finance-Alpaca</td><td>Finance QA</td><td>You are a finance expert. Answer the following questions to the best of your knowledge, and explain as much as possible.</td></tr></table>
551
+
552
+ Models. When implementing speculative decoding, selecting appropriate model pairs presents challenges. The parameter ratio is crucial, as a low ratio can negate speed gains if the draft model is not significantly faster than the target model. Additionally, both models must share the same tokenizer to avoid conversion overhead from differing tokenization schemes (Schuster and Nakajima, 2012; Sennrich, 2015). Speculative decoding is more effective with models trained on similar datasets, as seen with Meta's LLaMA models (Touvron et al., 2023b,a) or DeepMind's Chinchilla. We use mixed precision (FP16 or BF16), avoid quantization due to its slowdowns, and use deterministic decoding with a temperature of 0 for consistency (Hinton, 2015). Dolly is an open-source model from Databricks aimed at democratizing LLMs by offering open-source weights and the datasets needed for instruction fine-tuning (Conover et al., 2023). Table 6 details the model pairs.
553
+
554
+ Table 6: Model Card
555
+
556
+ <table><tr><td>Target Model</td><td>Draft Model</td><td>Same Vendor?</td><td>Ratio</td></tr><tr><td>Meta LLaMA 70B</td><td>Meta LLaMA 13B</td><td>Yes</td><td>5.4x</td></tr><tr><td>Meta LLaMA 70B</td><td>Meta LLaMA 7B</td><td>Yes</td><td>10x</td></tr><tr><td>BigScience BLOOM 7B</td><td>BigScience BLOOM 560M</td><td>Yes</td><td>12.5x</td></tr><tr><td>BigScience BLOOM 7B</td><td>BigScience BLOOM 1.1B</td><td>Yes</td><td>7x</td></tr><tr><td>Meta OPT 13B</td><td>Meta OPT 125M</td><td>Yes</td><td>96.3x</td></tr><tr><td>DataBricks Dolly 12B</td><td>DataBricks Dolly 3B</td><td>Yes</td><td>4.0x</td></tr></table>
557
+
558
+ Implementation Details. The FSM-based method and cache-enabled FSM-based method are inspired by branch prediction in computer architecture (Lee et al., 1997; Smith, 1998; Jiménez and Lin, 2001).
559
+
560
+ The reinforcement learning-based speculation involves online learning, so we conducted 25 warmup trials before recording benchmarks. To minimize overhead, the RL algorithm runs on the CPU rather than the GPU, ensuring both inference and training complete in under 1 millisecond. This makes the overhead negligible in the end-to-end latencies compared to standard speculative decoding. AI assistants are used for refining the writing.
561
+
562
+ # D.2 Additional Experiment Results
563
+
564
+ We include additional experiment results. Figure 5 and Figure 6 compare the throughput and acceptance rate of the different adaptive speculation methods on the A100 machine for the BigScience BLOOM 7B-560M and LLaMA 70B-7B model pairs, respectively.
565
+
566
+ We also experimented with the Qwen2.5 7B/0.5B model pair on the 4090 machine. It shows significant improvements over the baseline and speculative decoding, as shown below, consistent with the observations on other models (see Figure 7).
567
+
568
+ Table 7: Performance of Qwen 7B/0.5B on different datasets
569
+
570
+ <table><tr><td>Model Pairing</td><td>Dataset</td><td>SPS</td><td>ARS</td></tr><tr><td>Qwen 7B/0.5B</td><td>Alpaca</td><td>3.67%</td><td>1.13×</td></tr><tr><td>Qwen 7B/0.5B</td><td>Humaneval</td><td>1.57%</td><td>1.57×</td></tr><tr><td>Qwen 7B/0.5B</td><td>GSM8K</td><td>3.08%</td><td>1.41×</td></tr></table>
571
+
572
+ # D.3 Comprehensive chat dataset
573
+
574
+ Table 8 shows the throughput results of adaptive window size selection for different model pairs on different hardware on the ShareGPT dataset. The results of the online window size optimization method are reported. The experimental setups are the same as in Section 6.1.
575
+
576
+ # D.4 Non-greedy Decoding
577
+
578
+ We experimented with non-greedy scenarios where the temperature is set to one. The results are shown below in Table 9. Our method (ARS) still achieves significant improvements.
579
+
580
+ # D.5 Adaptive Speculation for Tree-based Decoding
581
+
582
+ We implemented our on-the-fly adaptation of speculative decoding on top of EAGLE-2, dynamically adjusting the draft tree depth $(\gamma)$ during decoding. For different $\gamma$, sequence lengths for different
583
+
584
+ ![](images/667245736c75916e294479e1e6c55c72f8d0b01e7e66a40c882a5a49a3febc34.jpg)
585
+ Figure 5: Detailed experimental results for BLOOM 7B-560M.
586
+
587
+ ![](images/051acb9b9f6c43f34789cb5929d1e66a4f1a551ef489dc24847850bcda1cb329.jpg)
588
+ Figure 6: Detailed experimental results for LLaMA 70B-7B.
589
+
590
+ branches of the draft tree are determined using the same expansion and rerank decision process as in the original EAGLE-2. Specifically, the tree depth changes dynamically for each speculation step. For a certain $\gamma$ in one speculation step, the algorithm first enters the expansion phase: at each layer of the tree, we select the top $k$ nodes with the highest probabilities and expand draft sequences based on these nodes. The longest draft sequence in the tree corresponds to the dynamically determined depth $\gamma$. Once the expansion is complete up to the dynamically determined $\gamma$-th layer, we apply a rerank step to select the same number of tokens
591
+
592
+ from the draft tree as in EAGLE-2 and validate the corresponding draft sequences.
593
+
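+ The toy sketch below is ours: expand_node, the top-$k$ width, and the rerank budget are stand-ins for the corresponding EAGLE-2 components rather than the actual implementation, with randomly scored nodes replacing the draft model. It only illustrates how an adaptively chosen depth $\gamma$ bounds the expansion before a fixed-budget rerank.
+
+ ```python
+ # Adaptive-depth draft-tree expansion followed by a fixed-budget rerank.
+ import heapq
+ import random
+
+ def expand_node(node, width, rng):
+     # Fake draft-model step: propose `width` children with cumulative scores.
+     score, depth = node
+     return [(score + rng.uniform(-2.0, 0.0), depth + 1) for _ in range(width)]
+
+ def build_draft_tree(gamma, top_k=4, rerank_budget=8, seed=0):
+     rng = random.Random(seed)
+     frontier = [(0.0, 0)]                 # (cumulative score, depth) of the root
+     all_nodes = []
+     for _ in range(gamma):                # expand up to the adaptive depth gamma
+         children = [c for node in frontier for c in expand_node(node, top_k, rng)]
+         frontier = heapq.nlargest(top_k, children)   # keep top-k nodes per layer
+         all_nodes.extend(children)
+     # Rerank: keep a fixed token budget from the whole tree, as in EAGLE-2.
+     return heapq.nlargest(rerank_budget, all_nodes)
+
+ for gamma in (3, 5, 7):
+     kept = build_draft_tree(gamma)
+     print(f"gamma={gamma}: kept node depths {sorted(d for _, d in kept)}")
+ ```
+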
594
+ Table 10 shows the results of adaptive tree depth selection on EAGLE-2 for different model pairs on different hardware for MT-Bench. The experimental setups are the same as in Section 6.1. We achieve up to $3.56 \times$ speedups compared to original autoregressive decoding, and an additional $4.2\%$ improvement over EAGLE-2. We also achieve speedups of up to $4.25 \times$ , $3.75 \times$ , and $3.85 \times$ compared to original autoregressive decoding on the A100 machine for HumanEval, GSM8K, and Alpaca, respectively, with improvements of $4.27\%$ ,
595
+
596
+ Table 8: Evaluation for the comprehensive chat dataset. SPS denotes the throughput improvement our method achieves over the original speculative decoding. ARS denotes improvements over the default LLMs without speculative decoding.
597
+
598
+ <table><tr><td rowspan="2">Hardware</td><td rowspan="2">Model Pairing</td><td rowspan="2">Dataset</td><td colspan="2">Throughput</td></tr><tr><td>SPS</td><td>ARS</td></tr><tr><td rowspan="3">A100</td><td>LLaMA 70B/7B</td><td>shareGPT</td><td>7.89%</td><td>2.20×</td></tr><tr><td>LLaMA 70B/13B</td><td>shareGPT</td><td>3.69%</td><td>1.92×</td></tr><tr><td>OPT 13B/125M</td><td>shareGPT</td><td>4.81%</td><td>2.10×</td></tr><tr><td rowspan="2">4090</td><td>BLOOM 7B/560M</td><td>shareGPT</td><td>4.58%</td><td>1.18×</td></tr><tr><td>BLOOM 7B/1B1</td><td>shareGPT</td><td>3.50%</td><td>1.18×</td></tr></table>
599
+
600
+ Table 9: Comparison of platform and model pairing performance across different datasets.
601
+
602
+ <table><tr><td>Platform</td><td>Model Pairing</td><td>Dataset</td><td>SPS</td><td>ARS</td></tr><tr><td>4090</td><td>BLOOM 7B/560M</td><td>Alpaca</td><td>4.38%</td><td>1.20×</td></tr><tr><td>4090</td><td>BLOOM 7B/560M</td><td>Humaneval</td><td>9.83%</td><td>1.27×</td></tr><tr><td>4090</td><td>BLOOM 7B/560M</td><td>GSM8K</td><td>7.76%</td><td>1.14×</td></tr><tr><td>A100</td><td>LLaMA-2 70B/7B</td><td>Alpaca</td><td>2.77%</td><td>2.09×</td></tr><tr><td>A100</td><td>LLaMA-2 70B/7B</td><td>Humaneval</td><td>4.31%</td><td>2.38×</td></tr><tr><td>A100</td><td>LLaMA-2 70B/7B</td><td>GSM8K</td><td>2.38%</td><td>2.30×</td></tr></table>
603
+
604
+ 5.65%, and 3.83% over EAGLE-2. On the 4090 machine, for HumanEval, GSM8K, and Alpaca, we achieve speedups of up to $2.72 \times$ , $3.27 \times$ , and $2.52 \times$ compared to original autoregressive decoding, respectively, with improvements of 6.23%, 2.55%, and 2.92% over EAGLE-2.
605
+
606
+ Specifically, Table 10 shows the size of the draft model and the size ratio of the target-draft model pair. While larger ratios can theoretically lead to larger speedups, our experimental results reveal a more nuanced interaction. In particular, as the ratio increases, a fixed window size results in a reduced acceptance rate, as evidenced by the results for the original EAGLE-2 in Table 11 (0.62, 0.61, and 0.51 for LLaMA2-Chat 7B, 13B, and 70B).
607
+
608
+ Our adaptive mechanism adjusts the window size, decreasing its value as the ratio becomes larger to maintain the acceptance rate. This adaptive reduction in window size (average value of 4.02 and 3.62 for 13B and 70B, respectively) explains why the speedup improvements decrease when scaling from 13B to 70B models. In summary, although higher parameter ratios offer potential for larger speedups, the necessary adjustments in gamma to maintain robust acceptance rates introduce a mixed effect.
609
+
610
+ Table 11 provides a detailed analysis of serving throughput, speculation throughput, verification throughput,
611
+
612
+ and acceptance rate. Speculation throughput is measured as the number of tokens selected from the draft tree per second. Our method shows lower speculation throughput than EAGLE-2. This is because, while we dynamically adapt the tree depth, we keep the number of tokens selected from the draft tree the same as in EAGLE-2; with a larger tree depth, however, more tokens might be sampled due to the increased number of layers. Verification throughput is similar for EAGLE-2 and our method, as they utilize the same target model. Notably, our method improves the acceptance rate by dynamically adjusting the tree depth, which effectively changes the speculation window size.
613
+
614
+ # D.6 Sensitivity Study
615
+
616
+ Effects of different history length. Table 12 shows a sensitivity study for the effects of different history lengths when adjusting the window size. The results are collected on the A100 machine for the BLOOM 7B-560M pair.
617
+
618
+ Effects of vector length. Table 13 shows the sensitivity study for the effects of different vector dimensions for model selection. The results are collected on the 4090 machine for the BLOOM 7B-560M pair.
619
+
620
+ Table 10: Evaluation for adaptive speculation in improving EAGLE-2, a method for tree-based speculative decoding. SPS denotes the throughput improvement our method achieves over EAGLE-2. ARS denotes improvements over the default LLMs without speculative decoding. ("-": model is out of memory)
621
+
622
+ <table><tr><td rowspan="2">Target Model</td><td rowspan="2">Draft Model</td><td rowspan="2">Ratio</td><td rowspan="2">Dataset</td><td colspan="2">A100</td><td colspan="2">4090</td></tr><tr><td>SPS</td><td>ARS</td><td>SPS</td><td>ARS</td></tr><tr><td>Vicuna-7B-v1.3</td><td>EAGLE-Vicuna-0.24B</td><td>29×</td><td>MTBench</td><td>7.07%</td><td>3.21×</td><td>6.22%</td><td>2.28×</td></tr><tr><td>LLaMA2-Chat 7B</td><td>EAGLE-LLaMA2-Chat-0.24B</td><td>29×</td><td>MTBench</td><td>3.37%</td><td>3.29×</td><td>6.23%</td><td>2.72×</td></tr><tr><td>LLaMA2-Chat 13B</td><td>EAGLE-LLaMA2-Chat-0.37B</td><td>35×</td><td>MTBench</td><td>2.55%</td><td>4.01×</td><td>-</td><td>-</td></tr><tr><td>LLaMA2-Chat 70B</td><td>EAGLE-LLaMA2-Chat-0.99B</td><td>71×</td><td>MTBench</td><td>1.46%</td><td>3.56×</td><td>-</td><td>-</td></tr><tr><td>LLaMA3-Inst 70B</td><td>EAGLE-LLaMA3-Inst-0.99B</td><td>71×</td><td>MTBench</td><td>1.14%</td><td>2.68×</td><td>-</td><td>-</td></tr></table>
623
+
624
+ Table 11: Detailed analysis for adaptive speculation in improving EAGLE-2. Data are collected on the MT-Bench. "Speculation" and "Verification" denote speculation throughput and verification throughput, respectively. (Unit for throughput: Tokens/sec)
625
+
626
+ <table><tr><td>Hardware</td><td>Target Model</td><td>Method</td><td>Serving</td><td>Speculation</td><td>Verification</td><td>Acceptance Rate</td></tr><tr><td rowspan="10">A100</td><td rowspan="2">Vicuna-7B-v1.3</td><td>EAGLE-2</td><td>82.44</td><td>472.58</td><td>708.31</td><td>0.67</td></tr><tr><td>Ours</td><td>85.27</td><td>446.71</td><td>666.01</td><td>0.72</td></tr><tr><td rowspan="2">LLaMA2-Chat 7B</td><td>EAGLE-2</td><td>97.81</td><td>569.05</td><td>4491.42</td><td>0.62</td></tr><tr><td>Ours</td><td>100.41</td><td>299.85</td><td>4591.73</td><td>0.66</td></tr><tr><td rowspan="2">LLaMA2-Chat 13B</td><td>EAGLE-2</td><td>79.74</td><td>558.02</td><td>4535.35</td><td>0.61</td></tr><tr><td>Ours</td><td>81.51</td><td>491.37</td><td>4530.73</td><td>0.62</td></tr><tr><td rowspan="2">LLaMA2-Chat 70B</td><td>EAGLE-2</td><td>27.50</td><td>389.38</td><td>4532.27</td><td>0.51</td></tr><tr><td>Ours</td><td>27.90</td><td>192.02</td><td>4491.19</td><td>0.65</td></tr><tr><td rowspan="2">LLaMA3-Inst 70B</td><td>EAGLE-2</td><td>24.33</td><td>266.33</td><td>3392.65</td><td>0.53</td></tr><tr><td>Ours</td><td>24.61</td><td>132.08</td><td>3300.60</td><td>0.65</td></tr><tr><td rowspan="4">4090</td><td rowspan="2">Vicuna-7B-v1.3</td><td>EAGLE-2</td><td>117.95</td><td>665.97</td><td>1041.83</td><td>0.56</td></tr><tr><td>Ours</td><td>125.28</td><td>579.34</td><td>1164.03</td><td>0.56</td></tr><tr><td rowspan="2">LLaMA2-Chat 7B</td><td>EAGLE-2</td><td>142.15</td><td>712.72</td><td>8278.28</td><td>0.67</td></tr><tr><td>Ours</td><td>151.00</td><td>643.91</td><td>8137.47</td><td>0.72</td></tr></table>
627
+
628
+ Table 12: Sensitivity study for different history length values when adjusting window size. The best throughput is highlighted for each $\gamma_{\mathrm{max}}$ .
629
+
630
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">History Length</td><td colspan="4">γmax</td></tr><tr><td>5</td><td>6</td><td>7</td><td>8</td></tr><tr><td rowspan="3">Alpaca</td><td>5</td><td>52.28</td><td>52.49</td><td>52.05</td><td>52.37</td></tr><tr><td>6</td><td>54.18</td><td>53.46</td><td>52.71</td><td>53.00</td></tr><tr><td>7</td><td>53.03</td><td>53.01</td><td>54.32</td><td>53.36</td></tr><tr><td rowspan="3">Humaneval</td><td>5</td><td>93.98</td><td>94.48</td><td>94.21</td><td>94.21</td></tr><tr><td>6</td><td>94.84</td><td>95.18</td><td>93.39</td><td>93.39</td></tr><tr><td>7</td><td>94.54</td><td>94.30</td><td>93.41</td><td>93.41</td></tr><tr><td rowspan="3">gsm8k</td><td>5</td><td>62.69</td><td>62.15</td><td>63.42</td><td>64.03</td></tr><tr><td>6</td><td>61.48</td><td>61.78</td><td>63.72</td><td>61.84</td></tr><tr><td>7</td><td>64.77</td><td>61.38</td><td>62.74</td><td>64.27</td></tr></table>
631
+
632
+ Table 13: Sensitivity study for different dimensions for model selection.
633
+
634
+ <table><tr><td>Dimension</td><td>4</td><td>8</td><td>10</td><td>12</td><td>16</td></tr><tr><td>Throughput</td><td>74.29</td><td>75.23</td><td>74.00</td><td>75.46</td><td>75.55</td></tr></table>
2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edb7260683944b2f64c32fc1938dc1dd28824319c0a28c368a4f8780687d8ec9
3
+ size 919578
2025/A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/b14c9fdc-56df-4bf3-b892-7ce85722bb2a_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/b14c9fdc-56df-4bf3-b892-7ce85722bb2a_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/b14c9fdc-56df-4bf3-b892-7ce85722bb2a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:96a59d7e8ea0fc020a97145f4c29723d428a7e029555ae6e50369fa0da385d71
3
+ size 784254
2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/full.md ADDED
@@ -0,0 +1,573 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # A Dual-Mind Framework for Strategic and Expressive Negotiation Agent
2
+
3
+ Yutong Liu $^{1}$ , Lida Shi $^{2}$ , Rui Song $^{3}$ , Hao Xu $^{3*}$
4
+
5
+ <sup>1</sup>College of Software, Jilin University, Changchun, China
6
+
7
+ $^{2}$ School of Artificial Intelligence, Jilin University, Changchun, China
8
+
9
+ $^{3}$ College of Computer Science and Technology, Jilin University, Changchun, China
10
+
11
+ {yutong23,shild21}@mails.jlu.edu.cn
12
+
13
+ {songrui,xuhao}@jlu.edu.cn
14
+
15
+ # Abstract
16
+
17
+ Negotiation agents need to influence the attitudes or intentions of users to reach a consensus. Strategy planning and expressive optimization are crucial aspects of effective negotiations. However, previous studies have typically focused on only one of these aspects, neglecting the fact that their combined synergistic effect can lead to better performance. Inspired by the dual-process theory in human cognition, we propose a Dual-Mind Negotiation Agent (DMNA) framework. This framework integrates an intuitive module for rapid, experience-based response and a deliberative module for slow, expression optimization. The intuitive module is trained using Monte Carlo Tree Search (MCTS) and Direct Preference Optimization (DPO), enabling it to make suitable strategic planning and expression. The deliberative module employs a multifaceted reflexion mechanism to enhance the quality of expression. Experiments conducted on negotiation datasets confirm that DMNA achieves state-of-the-art results, demonstrating an enhancement in the negotiation ability of agents<sup>1</sup>.
18
+
19
+ # 1 Introduction
20
+
21
+ Negotiation dialogue involves psychology, politics, and communication, playing a crucial role in daily life (Deng et al., 2024a). Characterized as a mixed-initiative dialogue system, negotiation dialogue reflects real-world situations where users might be unwilling to cooperate with the system to reach goals. It aims the agents to mediate conflicts and facilitate mutual agreement among individuals (Zhan et al., 2024). To achieve it, the agents need to possess negotiation capabilities, such as strategic planning and expression skills (Deng et al., 2023a).
22
+
23
+ Recently, Large Language Models (LLMs) possess the remarkable ability to generate highly con
24
+
25
+ # Conversation History
26
+
27
+ Hi, this laptop has high-end specs and is priced at $1000.
28
+ Hi, I'm really interested in this laptop. Could you lower the price a bit?.
29
+ : Sorry, this is already a great deal. I have to stick to my original price.
30
+
31
+ # Planning-based Agent
32
+
33
+ : [Propose a counter price]The most I can offer is $800. That's my final offer. (Proactive, Goal-oriented) More Aggressive
34
+
35
+ # Expression-based Agent
36
+
37
+ I understand your pricing, but I really am limited by my budget. Could you think about it again? (Expressive) Less Proactive
38
+
39
+ # DMNA
40
+
41
+ : [Propose a counter price] I get that this laptop has high specs, but with my budget in mind, I'm willing to offer $800. What do you think about this price? (Proactive, Goal-oriented, Expressive)
42
+
43
+ Figure 1: Examples of the response of planning-based Agent, expression-based Agent, and DMNA respectively.
44
+
45
+ vincing content that can rival, even surpass human-crafted negotiation, which significantly empowers negotiation agents. Existing approaches can be broadly divided into two categories: planning-based and expression-based methods. Planning-based methods (e.g., GDP-ZERO (Yu et al., 2023), TRIP (Zhang et al., 2024), and DPDP (He et al., 2024)) operate by selecting suitable dialogue strategy from the predefined set based on current conversational state. These methods treat strategy planning as a decision-making problem and achieve efficiency through elaborate algorithms. However, their reliance on fixed strategies limits adaptability in real negotiations. Moreover, the quality of response is only dependent on the language backbone, neglecting fine-grained expressive constraints. In contrast, expression-based methods (e.g., AnE (Zhang et al., 2023), and ICL-AIF (Fu et al., 2023)) focus on the expression optimizing in negotiation. These methods excel in user perception and adaptive responses, but they lack quantifiable mechanisms for achieving goals, resulting in
46
+
47
+ suboptimal outcomes.
48
+
49
+ As illustrated in Figure 1, it presents a common negotiation scenario where a buyer expresses interest in a laptop and wants a price reduction. But the seller insists on original price. In this situation, three types of agents offer their negotiation response. The planning-based agent employs a 'propose a counter price' strategy, proposing a lower price and stating it as the final offer. This response proactively guides the conversation and is goal-oriented, aiming to purchase the laptop at the lowest price. However, there are evident shortcomings, such as the aggressive expression potentially leading to the failure of the negotiation. Conversely, the expression-based agent adopts a more empathetic and persuasive expression. It acknowledges the price of the seller and cites budget limitation as the reason for price reduction, and politely asks the seller to reconsider. This response is expressive and considerate, which can build rapport and understanding with the seller. However, it sacrifices some proactivity in negotiation, which might lead to a suboptimal outcome. Outstanding negotiation agents require both capabilities synergistically: strategic planning and expression skills, such as the response of DMNA in Figure 1. Our work addresses this integration gap by developing a unified framework that combines strategic planning with expressive optimization.
50
+
51
+ Inspired by the dual-process theory (Kahneman, 2003a), we propose a Dual-Mind Negotiation Agent. This agent uses a response model trained with strategy and expression experiences as the intuitive module and employs a multifaceted reflexion mechanism for expression as the deliberative module. During the negotiation process, the two cognitive systems are connected and interact with each other, achieving both strategy planning and expression optimization. Specifically, the intuitive module is trained on past negotiation experiences, which involve negotiation strategies and expression. We employ the MCTS process to sample strategies and expressions from negotiation dialogues and form preference data pairs. These data pairs are used to train the small model (e.g., LLaMA-8B (Touvron et al., 2023)) with Direct Preference Optimization (DPO)(Rafailov et al., 2023), equipping it with basic intuitive strategic and expressive capabilities across general dialogue scenarios. The deliberative module utilizes reflection based on multi-critics and moderator, ensuring high-quality
52
+
53
+ expression even in complex and unfamiliar dialogue states. The two modules are linked and interact through memory storage, influencing each other and working together to enhance the negotiation performance. In summary, our key contributions are concluded below:
54
+
55
+ - We develop a novel framework DMNA that combines strategic planning with expression optimization, enabling existing planning-based methods and expression-based methods to complement each other.
56
+ - We utilize MCTS to obtain preference data and fine-tune the model with DPO, endowing it with planning capability. We also propose a multifaceted reflexion mechanism that is more suited to negotiation than reflexion.
57
+ - Experimental results on two datasets suggest that DMNA outperforms planning-based and expression-based baselines, and it effectively enhances the negotiation ability of agents.
58
+
59
+ # 2 Related Work
60
+
61
+ # 2.1 Dialogue Policy Planning in Negotiation
62
+
63
+ To achieve a successful negotiation, some studies have focused on the planning of negotiation dialogue strategies. Early methods mostly used neural-focused, algorithm-focused, and reinforcement learning approaches, but these rely on annotated data and depend on the elaborate algorithm design (Zhang et al., 2022; Cheng et al., 2022; Gao et al., 2021).
64
+
65
+ LLMs present both opportunities and challenges for dialogue planning. For example, Deng et al. (2023a) adopts the prompts to select the strategy proactively, but nontrainable parameters limit effectiveness. Yu et al. (2023) integrates Monte Carlo Tree Search (MCTS) with LLMs prompts to optimize strategy selection, achieving promising performance. However, this method suffers from inefficiency and high costs. Additionally, there is a paradigm that employs a trainable policy model as plugins to assist LLMs, such as Yu et al. (2023) and Zhang et al. (2024). While these can reduce costs, they still fall short in simulating the cognitive processes of future dialogue behaviors. He et al. (2024) is a dual system based on Yu et al. (2023) and Deng et al. (2024b), which can mitigate the limitations of both approaches. Nevertheless, in practical negotiation scenarios, relying solely on
66
+
67
+ ![](images/6980e17a08b487add710aeabed667eff8501e5297a40b7ea6f25b45f1188424a.jpg)
68
+ Figure 2: The overview of DMNA. This method includes an intuitive module and a deliberate module. The intuitive module consists of an experience-based module that provides intuitive responses by preference learning based on strategy-expression pairs. The deliberate module is composed of multifaceted reflexion that adjusts the suboptimal expression from the intuitive module.
69
+
70
+ strategy planning is not sufficient. It is also crucial to have the ability to finely perceive complex situations and generate appropriate responses.
71
+
72
+ # 2.2 Expression Quality in Negotiation
73
+
74
+ Existing studies emphasize the significance of negotiation expression. Recent work employs reinforcement learning to enhance specific dimensions: politeness (Mishra et al., 2022), empathy (Samad et al., 2022), and linguistic richness through human-guided demonstrations (Shi et al., 2021).
75
+
76
+ The aforementioned methods enhance expression quality from certain fine-grained aspects. In some recent research, LLMs are utilized to improve the expression quality of negotiation dialogues. Fu et al. (2023) uses self-play simulations to iteratively refine negotiation expression based on feedback from other LLMs regarding the current dialogue state. And Zhang et al. (2023) enhances the persuasiveness of responses by prompting another LLM as an expert to act for verbal suggestion. However, these methods lack strategic guidance in the dialogue process, which results in insufficient look-forward capability of the negotiation agents.
77
+
78
+ # 2.3 Dual-Mind of LLMs
79
+
80
+ Dual-process thinking is a cognitive pattern unique to humans, characterized by both intuitive and rational processes (Kahneman, 2003b; Bengio, 2019). Inspired by the theory of dual-process thinking, many fields have applied its principles to the workflow of LLMs (Yang et al., 2024; Xiao et al., 2024; Lin et al., 2023). For example, Cheng et al. (2025)
81
+
82
+ propose HaluSearch, which treats text generation as a stepwise reasoning process (fast system) and tree search algorithms (slow system) effectively mitigate hallucinations during inference. In dialogue planning, He et al. (2024) incorporates a policy LM model for fast responses and a Monte Carlo Tree Search (MCTS) mechanism for slow planning. They also use a two-stage training regimen to enhance planning in proactive dialogue together. Similarly, Tian et al. (2023) introduces DUMA, which embodies a dual mind mechanism through two LLMs for fast and slow thinking. This allows seamless transitions between intuitive responses and deliberate problem-solving in conversational scenarios. In our work, we apply the theory of dual-process to the construction of a negotiation agent. During the negotiation process, we aim to enhance negotiation capabilities by leveraging two types of cognitive patterns.
83
+
84
+ # 3 Dual-Mind Negotiation Agent
85
+
86
+ The proposed DMNA framework is shown in Figure 2. It includes two modules that mimic human cognitive patterns: an intuitive module based on experience and a deliberate module for multifaceted reflexion. These modules together improve the quality of negotiation statements and significantly enhance the agent's negotiation ability.
87
+
88
+ # 3.1 Experience-based Response Module
89
+
90
+ In this module, we aim to enable the agent to provide intuitive responses that are suitable for the current dialogue state. To achieve this, we intend
91
+
92
+ to train a model as the actor to learn from the look-forward strategy chosen and corresponding expression generation, allowing it to make suitable strategic planning and expression. There are two challenges we faced: (1) The lack of high-quality datasets that reflect the look-forward strategy and appropriate expression. and (2) How to learn the look-forward capability?
93
+
94
+ To address these challenges, drawing inspiration from the work of GDP-ZERO (Yu et al., 2023) and DPO (Rafailov et al., 2023), we design the following processes:
95
+
96
+ Data Generation through MCTS. To obtain the look-forward strategy and the corresponding expression data, we build upon the work of GDP-ZERO (Yu et al., 2023) and collect strategy-expression data from the MCTS process, structured in a '[strategy] expression' format. Specifically, we treat dialogue states as nodes in the search tree. To identify the next negotiation strategy, MCTS iteratively performs selection, expansion, evaluation, and backpropagation. Initially, we set the number of iterations to $K$ . In each iteration, the search tree is constructed with values for $Q$ values, state values $V$ , and visit counts $N$ being updated accordingly. After $K$ iterations, MCTS predicts the best negotiation strategy for the current dialogue state based on tree statistics. For details of MCTS, please see Appendix A.
97
+
98
+ Turn-Level Strategy-Expression Preference Data Pairs Construction. To endow the model with look-forward ability in strategy and expression, we construct turn-level strategy-expression preference data pairs for subsequent training.
99
+
100
+ We utilize the $Q$ values to reflect the preference for the next strategy at every turn. Specifically, the strategy with the highest $Q$ value is selected as the optimal choice for the current dialogue turn. Correspondingly, the expression associated with this strategy is chosen with the highest value function score. Together, the selected strategy and its corresponding expression form the positive sample. This pair represents the most favorable outcome given the current dialogue state. Alongside the positive sample, we also construct negative samples to provide a contrastive learning signal. These samples are derived from strategies that are explored during the search but ultimately not selected due to lower $Q$ values. For each unselected strategy, we identify its corresponding expression with the highest value function score. These pairs of uns
101
+
102
+ elected strategies and their associated expression serve as negative samples, indicating less favorable outcomes compared to the positive sample.
103
+
104
+ Preference Learning. Given the turn-level strategy-expression preference data collected via MCTS, we fine-tune the model (e.g., LLaMA) using DPO. The dataset $\mathcal{D} = (x,y_w,y_l)$ is a collection of data items, where each item is represented as a triplet $(x,y_w,y_l)$ . The triplet consists of an input prompt $x$ , a preferred response $y_w$ , and a dispreferred response $y_l$ .
105
+
106
+ $$
107
+ \mathcal {L} _ {\mathrm {d p o}} (\theta) = - \mathbb {E} _ {(x, y _ {w}, y _ {l}) \sim \mathcal {D}} \log \sigma \left(\beta h _ {\pi_ {\mathrm {r e f}}} ^ {\pi_ {\theta}} (x, y _ {w}, y _ {l})\right) \tag {1}
108
+ $$
109
+
110
+ The term $h_{\pi_{\mathrm{ref}}}^{\pi_{\theta}}(x,y_w,y_l)$ quantifies the reward differential between the preferred response and the dispreferred response, defined as:
111
+
112
+ $$
113
+ h _ {\pi_ {\mathrm {r e f}}} ^ {\pi_ {\theta}} (x, y _ {w}, y _ {l}) = \log \frac {\pi_ {\theta} \left(y _ {w} \mid x\right)}{\pi_ {\mathrm {r e f}} \left(y _ {w} \mid x\right)} - \log \frac {\pi_ {\theta} \left(y _ {l} \mid x\right)}{\pi_ {\mathrm {r e f}} \left(y _ {l} \mid x\right)} \tag {2}
114
+ $$
115
+
116
+ This process does not necessarily involve feeding the model with strictly optimal strategies or the most suitable expression. Instead, it aims to learn from past dialogue experiences and develop a capacity for look-forward planning, allowing it to make more informed decisions in new dialogue states. Additionally, compared to the MCTS method, this approach significantly reduces complex search processes and value calculations.
117
+
118
+ # 3.2 Multifaceted Reflexion Module.
119
+
120
+ Due to the complexity of negotiation scenarios that come from diverse dialogue states and different users, expression needs to be optimized from multiple perspectives. Additionally, even after finetuning, the model may still produce unstable outputs, making it insufficient to rely solely on the experience-based response module. To address this, we employ a reflective approach to make fine-grained adjustments to expression derived from the experience-based response module when necessary. Drawing on the Reflexion framework (Shinn et al., 2024), which features dynamic memory and reflective capabilities, we design a multi-critics and moderator reflexion mechanism named Multifaceted Reflexion. This mechanism is more adaptable for enhancing negotiation expression in multiple aspects. As shown in Figure 2, multifaceted reflexion comprises four key components:
121
+
122
+ Multi-Critics. The quality of negotiation expression can be evaluated from multiple perspectives, such as repetitiveness, continuity, richness, and empathy. A single-critic approach is therefore insufficient to assess sentence quality, so we employ a multi-critics approach to evaluate the current expression. On one hand, this method enhances the exploratory capabilities of the critics, allowing for a more comprehensive identification of the shortcomings in the current expression and providing richer feedback. On the other hand, it ensures the quality and stability of the reflexion process, thereby reducing the occurrence of hallucinations and errors.
125
+
126
+ The reflexion module is only activated to rewrite suboptimal expressions, as determined by the multi-critics. If the majority of critics deem the expression quality inadequate, reflexion is triggered.
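A minimal sketch of this trigger logic is shown below; each critic stands in for a prompted LLM judge (the real critics and their prompts are given in the Appendix), and the simple lambdas are placeholders only.

```python
# Reflexion fires only when a majority of critics judge the expression inadequate.

def should_reflect(expression, context, critic_fns):
    failures = sum(1 for critic in critic_fns if not critic(expression, context))
    return failures > len(critic_fns) / 2  # majority deems quality inadequate

# Toy usage: two of three stand-in critics fail, so reflexion is triggered.
critics = [lambda e, c: len(e) > 20,           # stand-in richness check
           lambda e, c: "price" in e.lower(),  # stand-in relevance check
           lambda e, c: True]
print(should_reflect("Deal.", "buyer asks about the price", critics))  # True
```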
127
+
128
+ Moderator. To organize and synthesize the diverse evaluations from multi-critics, we introduce a moderator role. The primary function of a moderator is to maintain an objective stance and provide a comprehensive summary that captures the essence of feedback from the critics. This involves: (1) Aggregating and Balancing Opinions. The moderator consolidates the evaluations, highlighting common themes while reconciling conflicting views to ensure a balanced perspective. (2) Enhancing Clarity. By refining complex feedback into clear and actionable insights, the moderator ensures that the summarized evaluations are easily understood and implemented. (3) Facilitating Improvement. The moderator's summary serves as a guide for iterative refinement, bridging the gap between evaluations and the improvement of negotiation sentences.
129
+
130
+ Memory. To integrate dynamic memory and iterative reflexion, we introduce a memory structure. It functions as a repository that stores suboptimal negotiation sentences along with the corresponding feedback from the moderator in the current turn. DMNA leverages this memory to enhance the generation of new responses through specific instructions. After reflexion in a given dialogue turn, the memory is updated and then cleared so that it contains only the most relevant and recent information for the current dialogue state. For more details regarding the regeneration of the actor component, please refer to Tables 8 and 9 in the Appendix.
131
+
132
+ Actor. The actor is based on the intuitive response model discussed in Section 3.1. Through experience-based learning, this model develops a certain level of negotiation capability. Due to the limitations of experience learning and the inherent instability of the model, it is necessary to impose fine-grained quality constraints on its expressions by having the actor regenerate them.
135
+
136
+ In general negotiation scenarios, the actor can provide a direct response. For complex dialogue states where the initial expression quality is poor, the actor employs multifaceted reflexion to regenerate the response based on refined experiences extracted from the memory. This iterative process improves response performance and better adapts to complex dialogue states.
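Putting the four components together, one possible turn-level control flow is sketched below. The `actor`, `critics`, and `moderator` callables stand in for the prompted LLM calls described in Appendix D; this schematic reflects our reading of the module, not the paper's exact implementation.

```python
# Schematic one-turn loop for the multifaceted reflexion module.
def respond(actor, critics, moderator, context, max_iters=3):
    memory = []                                    # per-turn memory, cleared each turn
    expression = actor(context, memory)            # intuitive first response
    for _ in range(max_iters):
        verdicts = [critic(expression, context) for critic in critics]
        if sum(1 for v in verdicts if v["passed"]) >= len(critics) / 2:
            break                                  # majority satisfied: keep the expression
        summary = moderator(verdicts)              # aggregate and balance critic feedback
        memory = [(expression, summary)]           # keep only the latest attempt and feedback
        expression = actor(context, memory)        # deliberate regeneration
    return expression
```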
137
+
138
+ # 4 Experiments
139
+
140
+ # 4.1 Experimental Setups
141
+
142
+ Dataset. To evaluate our framework, we conduct experiments on two negotiation datasets: PersuasionForGood (P4G; Wang et al., 2019) and CraigslistBargain (CB; He et al., 2018). These negotiation datasets involve two roles with distinct goals and pre-defined negotiation strategies, aiming to reach consensus through conversation. For details of the pre-defined negotiation strategies, please see Appendix D.3. P4G is set in a donation-persuasion scenario, where the persuader attempts to convince the persuadee to donate to an organization called Save the Children. In contrast, CB is based on a bargaining scenario in which the buyer tries to persuade the seller to accept a lower price, while the seller aims to reach a consensus at a higher price.
143
+
144
+ Baselines. We provide comparisons with two types of negotiation agents: 1) agents that enhance planning over pre-defined strategies (Planning-based), including ProCoT (Deng et al., 2023b), TRIP (Zhang et al., 2024), and GDP-Zero (Yu et al., 2023); 2) agents that optimize negotiation expression (Expression-optimized), including DialoGPT (Zhang et al., 2020), ICL-AIF (Fu et al., 2023), and AnE (Zhang et al., 2023). For detailed implementation of the above methods, please refer to Appendix B.1.
145
+
146
+ Evaluation Metrics. We utilize two types of evaluation methodologies: goal-based metrics and quality-based metrics. In line with Deng et al. (2024b), we employ average turn (AT) and success rate (SR) as goal-based metrics. AT evaluates the number of dialogue turns needed to achieve the negotiation goal, while SR measures the proportion of dialogues that successfully reach the negotiation goal. For CB, we use the Sale-to-List ratio (SL) to evaluate the buyer's deal. A higher SL indicates that the buyer gains more benefit from the deal, and we set SL to 0 if the deal fails.
147
+
148
+ <table><tr><td rowspan="2">Agents</td><td colspan="7">Price Negotiation</td></tr><tr><td>AT↓</td><td>SR↑</td><td>SL↑</td><td>N-Rep↑</td><td>Coh↑</td><td>Emp↑</td><td>Pers↑</td></tr><tr><td>ProCoT (Deng et al., 2023b)</td><td>7.62</td><td>0.60</td><td>0.2307</td><td>2.73</td><td>4.32</td><td>3.34</td><td>2.67</td></tr><tr><td>TRIP (Zhang et al., 2024)</td><td>6.34</td><td>0.68</td><td>0.4096</td><td>2.74</td><td>4.76</td><td>3.69</td><td>2.72</td></tr><tr><td>GDP-ZERO (Yu et al., 2023)</td><td>7.63</td><td>0.44</td><td>0.2401</td><td>4.24</td><td>4.78</td><td>3.51</td><td>3.21</td></tr><tr><td>DialoGPT (Zhang et al., 2020)</td><td>6.73</td><td>0.32</td><td>0.2012</td><td>2.70</td><td>4.01</td><td>3.32</td><td>2.62</td></tr><tr><td>LLaMA3.3-70b (Grattaftiori et al., 2024)</td><td>10.26</td><td>0.57</td><td>0.2734</td><td>3.05</td><td>4.86</td><td>3.58</td><td>3.29</td></tr><tr><td>GPT-4o-mini (OpenAI et al., 2024)</td><td>10.02</td><td>0.48</td><td>0.2097</td><td>2.66</td><td>4.71</td><td>3.65</td><td>3.10</td></tr><tr><td>ICL-AIF (Fu et al., 2023)</td><td>8.42</td><td>0.34</td><td>0.2503</td><td>4.25</td><td>4.86</td><td>3.46</td><td>2.94</td></tr><tr><td>AnE (Zhang et al., 2023)</td><td>5.60</td><td>0.34</td><td>0.1742</td><td>4.37</td><td>4.89</td><td>3.46</td><td>3.03</td></tr><tr><td>DMNA (Ours)</td><td>6.80</td><td>0.84</td><td>0.5359</td><td>4.52</td><td>4.82</td><td>3.69</td><td>4.04</td></tr><tr><td rowspan="2">Agents</td><td colspan="7">Persuasion for Good</td></tr><tr><td>AT↓</td><td colspan="2">SR↑</td><td>N-Rep↑</td><td>Coh↑</td><td>Emp↑</td><td>Pers↑</td></tr><tr><td>ProCoT (Deng et al., 2023b)</td><td>9.90</td><td colspan="2">0.18</td><td>2.89</td><td>4.27</td><td>3.56</td><td>3.13</td></tr><tr><td>TRIP (Zhang et al., 2024)</td><td>8.51</td><td colspan="2">0.55</td><td>3.61</td><td>4.82</td><td>4.02</td><td>3.91</td></tr><tr><td>GDP-ZERO (Yu et al., 2023)</td><td>9.74</td><td colspan="2">0.25</td><td>3.92</td><td>4.16</td><td>3.71</td><td>3.77</td></tr><tr><td>DialoGPT (Zhang et al., 2020)</td><td>9.73</td><td colspan="2">0.22</td><td>2.67</td><td>4.51</td><td>3.24</td><td>2.92</td></tr><tr><td>LLaMA3.3-70b (Grattaftiori et al., 2024)</td><td>12.13</td><td colspan="2">0.52</td><td>4.36</td><td>4.80</td><td>4.50</td><td>4.12</td></tr><tr><td>GPT-4o-mini (OpenAI et al., 2024)</td><td>12.86</td><td colspan="2">0.47</td><td>4.09</td><td>4.65</td><td>4.41</td><td>3.78</td></tr><tr><td>ICL-AIF (Fu et al., 2023)</td><td>10.54</td><td colspan="2">0.43</td><td>3.27</td><td>4.66</td><td>3.78</td><td>3.89</td></tr><tr><td>AnE (Zhang et al., 2023)</td><td>10.32</td><td colspan="2">0.46</td><td>3.40</td><td>4.79</td><td>4.07</td><td>3.84</td></tr><tr><td>DMNA (Ours)</td><td>8.14</td><td colspan="2">0.76</td><td>4.43</td><td>4.74</td><td>4.13</td><td>4.14</td></tr></table>
149
+
150
+ Table 1: Evaluation results on Price Negotiation and Persuasion for Good. Compared to the planning-based and expression-optimized baselines, DMNA enhances the negotiation ability of the agent.
151
+
152
+ Following Shi et al. (2021) and Samad et al. (2022), we assess expression quality based on Non-repetitiveness (N-Rep), Coherence (Coh), Empathy (Emp), and Persuasiveness (Pers) as quality-based metrics. For this purpose, we utilize an evaluation method powered by LLMs, supplemented by human evaluation for validation. Detailed definitions of the evaluation metrics are given in Appendix C.1.
153
+
154
+ Experimental Details. To enhance the realism of the negotiation environment, we employ the comprehensive user simulators from TRIP. These simulators enable the LLMs to exhibit diverse personas and incorporate resistance strategies to counteract the persuasion attempts of agents. Our MCTS implementation is based on the GDP-Zero framework and follows its code and parameter settings. Moreover, we adopt LLaMA-3-8B-Instruct (Touvron et al., 2023) as the backbone for the experience model and GPT-3.5-turbo-1106 as the backbone for the multifaceted reflexion module.
155
+
156
+ # 4.2 Main Results & Human Evaluation
157
+
158
+ Table 1 presents the evaluation results of our framework compared with the selected baselines on the CB and P4G datasets, respectively. Among the baseline methods, planning-based baselines achieve the highest SR and SL, while expression-optimized baselines demonstrate superior performance in expression metrics. This observation aligns with our expectations: planning-based baselines exhibit stronger look-forward capability toward the goal, whereas expression-optimized baselines generate higher-quality expressions.
161
+
162
+ DMNA effectively improves the negotiation ability of the conversation agent, enabled by its dual cognitive mechanism comprising an intuitive module and a deliberate module. As illustrated in Table 1, DMNA significantly outperforms all baselines across both tasks. Specifically, DMNA achieves a higher SR and fewer AT in task completion, while also attaining superior performance on quality-based metrics. This comprehensive improvement makes DMNA better suited for human-centric conversational agents (Deng et al., 2024a), as it not only efficiently accomplishes tasks but also emphasizes human needs and expectations.
163
+
164
+ The proposed evaluation approach exhibits significant reliability and aligns closely with outcomes from human judgment. As noted in Section 4.1, we employ LLM-powered evaluation. Given that LLM generation can be unstable, we further assess the reliability of the evaluation results by comparing them with human annotations. We first sample 50 dialogues generated by DMNA on each of the CB and P4G datasets. These dialogues are then annotated by three human annotators.
165
+
166
+ ![](images/57cef1660b733c2993a1e11e2caaf5c0e5b79dd6382afa8f271ba2ce5a869336.jpg)
167
+ Figure 3: The results of human A/B test. Each bar shows the ratios for "Ours Wins," "Tie," and "Ours Loss" from left to right.
168
+
169
+ The annotators use the same standard as the LLM-based evaluator. Subsequently, we calculate the Spearman correlations between the average of the human annotations and the annotations of the LLM evaluator. The results, presented in Table 2, demonstrate significant consistency, suggesting that LLM-powered evaluators can serve as a viable alternative to human annotators.
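The reliability check itself is straightforward to reproduce; with SciPy it amounts to the following (toy numbers only, one metric shown).

```python
import numpy as np
from scipy.stats import spearmanr

# Rows are sampled dialogues, columns are the three human annotators (one metric).
human = np.array([[4, 5, 3],
                  [3, 4, 4],
                  [5, 5, 4],
                  [2, 3, 3]])
llm = np.array([4.0, 3.5, 4.5, 2.5])  # LLM evaluator scores for the same dialogues

rho, p = spearmanr(human.mean(axis=1), llm)  # correlate the annotator mean with the LLM scores
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```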
170
+
171
+ Moreover, our evaluation results exhibit a high degree of consistency with human judgment. Specifically, we conduct an A/B test comparing expressions generated by DMNA with those from baseline methods, evaluating them based on quality-based metrics. As illustrated in Figure 3, DMNA consistently outperforms other baseline methods, further confirming the effectiveness of our approach.
172
+
173
+ <table><tr><td>Dataset</td><td>N-Rep</td><td>Coh</td><td>Emp</td><td>Pers</td></tr><tr><td>CB (He et al., 2018)</td><td>0.61</td><td>0.52</td><td>0.56</td><td>0.67</td></tr><tr><td>P4G (Wang et al., 2019)</td><td>0.68</td><td>0.62</td><td>0.64</td><td>0.70</td></tr></table>
174
+
175
+ Table 2: The result of Spearman correlation statistics. The Spearman correlations between human evaluation results and our method's evaluation results indicate a strong correlation.
176
+
177
+ <table><tr><td rowspan="2">Agents</td><td colspan="8">Price Negotiation</td></tr><tr><td>AT</td><td>SR</td><td>SL</td><td>N-Rep</td><td>Coh</td><td>Emp</td><td>Pers</td><td>Ref</td></tr><tr><td>DMNA</td><td>6.80</td><td>0.84</td><td>0.5359</td><td>4.52</td><td>4.82</td><td>3.69</td><td>4.04</td><td>22.06</td></tr><tr><td>w/o DPO</td><td>10.68</td><td>0.42</td><td>0.2524</td><td>3.57</td><td>4.11</td><td>3.46</td><td>3.50</td><td>24</td></tr><tr><td>w/o Ref</td><td>10.72</td><td>0.40</td><td>0.2303</td><td>3.00</td><td>3.53</td><td>2.96</td><td>2.69</td><td>-</td></tr><tr><td>w/o MC</td><td>9.80</td><td>0.58</td><td>0.4048</td><td>3.48</td><td>3.98</td><td>3.59</td><td>3.40</td><td>30.3</td></tr><tr><td rowspan="2">Agents</td><td colspan="8">Persuasion for Good</td></tr><tr><td>AT</td><td colspan="2">SR</td><td>N-Rep</td><td>Coh</td><td>Emp</td><td>Pers</td><td>Ref</td></tr><tr><td>DMNA</td><td>8.14</td><td colspan="2">0.76</td><td>4.43</td><td>4.74</td><td>4.13</td><td>4.14</td><td>3.24</td></tr><tr><td>w/o DPO</td><td>9.51</td><td colspan="2">0.66</td><td>3.87</td><td>4.73</td><td>4.01</td><td>4.15</td><td>5.32</td></tr><tr><td>w/o Ref</td><td>8.72</td><td colspan="2">0.67</td><td>4.31</td><td>4.76</td><td>3.92</td><td>3.92</td><td>-</td></tr><tr><td>w/o MC</td><td>9.20</td><td colspan="2">0.70</td><td>4.36</td><td>4.77</td><td>3.95</td><td>4.06</td><td>5.66</td></tr></table>
178
+
179
+ Table 3: The evaluation results of the ablation study. The experience-based response module, the multifaceted reflexion module, and the multi-critics and moderator reflexion mechanism are all effective in improving the agent and complement each other. Ref denotes the number of reflexion iterations.
180
+
181
+ # 4.3 Ablation Study
182
+
183
+ To explore the effects of each component in DMNA, we devise several variants as follows:
184
+
185
+ - w/o DPO represents removing the experience-based response module, so the intuitive module falls back to the vanilla LLaMA-3-8B-Instruct backbone.
186
+ - w/o Ref represents removing the multifaceted reflexion module, where the model only provides intuitive responses.
187
+ - w/o MC represents removing the multi-critics and moderator reflexion mechanism, using single-critic reflexion instead.
188
+
189
+ We summarize the performance of each model variation. Based on the results in Table 3, we obtain the following observations:
190
+
191
+ The intuitive module, after experience learning, exhibits look-forward planning capability. Specifically, the results show that DMNA outperforms w/o DPO by improving both SR and SL. This suggests that the intuitive module, following experience learning, possesses look-forward, proactive, and goal-oriented ability. In addition, DMNA reduces the number of reflexion iterations, indicating that DMNA's expressions already carry a certain degree of negotiation capability, enabling it to provide usable intuitive responses.
192
+
193
+ The deliberate module can enhance the quality of expression and further improve negotiation outcomes. In the comparison between DMNA and w/o Ref, we note that DMNA achieves greater improvements in the expression metrics (N-Rep, Coh, Emp, and Pers) of negotiation expression. This indicates that the deliberate module constrains and optimizes the quality of expression, leading to better negotiation performance (SR and SL).
194
+
195
+ ![](images/5daeacac20a4dc4e2732cf844e42115c55b95ad32c3186c634a77683feaa1ff8.jpg)
196
+ (a) Frequency of triggering reflexion w.r.t conversation turns in CB.
197
+
198
+ ![](images/ee0affccd24ec5b43b7f5f1feabbc47db84e0b380cebb495e8a0c3f479f18417.jpg)
199
+ (b) Expression quality w.r.t conversation turns in CB.
200
+
201
+ ![](images/733a4158ceffe21b643e162e448a4dec08c5bbc4dcea45a2b9fbcb65c41ff779.jpg)
202
+ (c) Frequency of triggering reflexion w.r.t conversation turns in P4G.
203
204
+
205
+ ![](images/ed0580675afc08efa3d92e0029722cbdafc62116eef61706e8053f9157a27598.jpg)
206
+ (d) Expression quality w.r.t conversation turns in P4G.
207
+ Figure 4: Frequency of triggering reflexion and expression quality of DMNA in different conversation turns.
208
+
209
210
+
211
+ Multifaceted reflexion outperforms single-critic reflexion and reduces the number of reflexion iterations. By comparing DMNA and w/o MC, we find that multifaceted reflexion achieves higher performance than single-critic reflexion, especially on the expression evaluations. This suggests that multifaceted reflexion provides more comprehensive feedback on expression quality than a single critic. Moreover, multifaceted reflexion reduces the number of reflexion iterations, indicating that this approach offers greater stability. These findings collectively show that multifaceted reflexion not only offers stable feedback but is also better suited to the complex dynamics of dialogue states.
212
+
213
+ # 4.4 In-depth Analysis
214
+
215
+ Frequency of triggering Reflexion and Quality of Expression w.r.t Conversation Turns. In this part, we analyze the variations in the frequency of triggering reflexion and the quality of expression as the conversation progresses. As shown in Figure 4(a) and 4(c), we note that there is an initial gradual increase in the frequency of reflexion, which peaks during the middle turns of the conversation, followed by a significant decline in the later stages. The findings suggest that reflexion plays a crucial role during the early and middle stages of the di
216
+
217
+ ![](images/56dfb1781f6c80bf13d1779ea3018e0d8cf47467ac71d302723b3981ac3658b1.jpg)
218
+ (a) The number of reflexion iterations in DMNA corresponding to different personas.
219
+
220
+ ![](images/f2569801ac430329c51fb740b442a5e6dca08980dcee560de59f61fea61c9235.jpg)
221
+ (b) The performance of expression quality in DMNA across different personas.

+ Figure 5: The number of reflexion iterations and expression quality across different personas.
222
+
223
+ alogue, while its importance tends to diminish in the later stages.
224
+
225
+ To further analyze the relationship between reflexion frequency and expression quality, we examine different intervals of dialogue turns (0-1, 2-5, 6-10). As illustrated in Figures 4(b) and 4(d), expression quality generally improves as the conversation proceeds. This suggests that DMNA tends to optimize its expression through iterative reflexion. For instance, comparing the dialogue-turn intervals 0-1 and 2-5, the average reflexion frequency increases. Correspondingly, we observe that DMNA yields positive improvements across expression metrics, such as a significant increase in Coh, Emp, and Pers. It is worth noting that, as the conversation proceeds, the N-Rep metric exhibits a declining trend. This may be attributed to the agent's increasing focus on topics and expression patterns that may interest the user in order to more effectively advance the negotiation process. These results demonstrate that multiple reflexions can significantly enhance expression quality.
226
+
227
+ Analysis of the Adaptability of DMNA across Different Personas. As shown in Figures 5(a) and 5(b), we conduct an in-depth analysis of the quality-based performance of DMNA across the Big Five personality types (Goldberg, 1992). Specifically, we assess the average number of reflexion iterations and the average value of the expression metrics (N-Rep, Coh, Emp, and Pers) for every persona in the Big Five personality types. For example, individuals with high conscientiousness typically demand detailed and well-planned information. This leads to more frequent adjustments and optimizations in DMNA's expression, resulting in the highest number of reflexions among all persona types. However, DMNA's performance in Emp and Pers is relatively weaker for this group. This may be because individuals with high conscientiousness prioritize task completion and accuracy over emotional resonance. The evaluation results indicate that DMNA flexibly adjusts the number of reflexion iterations and varies its expression depending on the user's persona. We also analyze the adaptive adjustments for the other four persona types. For more details, please see Appendix C.2.
230
+
231
+ # 5 Conclusion
232
+
233
+ In this study, we propose the Dual-Mind Negotiation Agent (DMNA) framework, which integrates strategic planning and expressive optimization to enhance the negotiation ability of agents. Inspired by the dual-process theory in human cognition, DMNA comprises two modules: an intuitive module trained using MCTS and DPO, and a deliberative module that employs a multifaceted reflexion mechanism to optimize expression quality. Our experimental results on two negotiation datasets demonstrate that DMNA significantly outperforms existing state-of-the-art methods. These findings indicate that DMNA effectively bridges the gap between planning-based and expression-based methods, offering a more human-centric approach to negotiation dialogue agents.
234
+
235
+ # Limitation
236
+
237
+ Although the DMNA framework has shown promising results on existing datasets, its performance may be limited in more complex scenarios such as multi-party negotiations, cross-cultural interactions, and multimodal negotiation domains. For instance, in multi-party negotiations, the intertwined interests and diverse strategies of multiple participants require the agent to coordinate and optimize negotiation goals in real time. In cross-cultural negotiations, differences in cultural backgrounds can lead to varying interpretations of the same strategy, demanding stronger cultural adaptability and expressive flexibility from the negotiation agent. Moreover, as negotiations increasingly involve multimodal interactions, DMNA needs to enhance its ability to process non-textual information such as visual and auditory cues to better meet the demands of complex negotiation scenarios.
238
+
239
+ Therefore, our future work will focus on addressing these underexplored challenges by incorporating richer training data, optimizing strategy generation mechanisms, and enhancing multimodal
240
+
241
+ interaction capabilities, thereby further improving DMNA's performance in diverse and complex real-world negotiation settings.
242
+
243
+ # Ethics Statement
244
+
245
+ Our Dual-Mind Negotiation Agent (DMNA) is designed to enhance the effectiveness of dialogue systems in assisting users or systems in accomplishing tasks and goals. We explicitly reject the use of DMNA for unethical purposes such as manipulation or fraud. We are committed to ensuring our work benefits users and society. The risks involved are as follows:
246
+
247
+ Automation bias and user dependence: Systems may have automation bias. Users may overly rely on DMNA's negotiation strategies and expressions. To address this, we clarify that DMNA is designed to assist users in negotiating, not to replace human judgment. Users should critically assess the system's output and make decisions based on their own understanding.
248
+
249
+ Non-zero-sum negotiation dynamics: Negotiation scenarios are not always zero-sum games. The DMNA aims to promote mutual agreements and cooperation among parties. It does not prioritize one party's interests over another's. DMNA focuses on finding common ground and achieving mutually beneficial outcomes.
250
+
251
+ Personality trait measurement bias: Section 4.4 shows performance variations among the 'Big Five' personality types. While the analysis highlights adaptive differences, we do not explicitly address fairness or mitigate potential drawbacks of specific traits. This remains an open challenge.
252
+
253
+ Human annotator conditions: Human annotators are involved in verifying the quality of expressions. They are compensated at a rate of 15 dollars per hour, with tasks limited to 2 hours to prevent fatigue.
254
+
255
+ # Acknowledgements
256
+
257
+ This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No.62476111 and Grant No.72091315, the Changchun Science and Technology Bureau under Grant No. 24GNYZ11, the Department of Science and Technology of Jilin Province, China under Grant No.20230201086GX and Industry University Research Innovation Fund of the Ministry of Education under Grant No.2022XF017.
258
+
259
+ # References
260
+
261
+ Yoshua Bengio. 2019. The consciousness prior.
262
+
263
+ Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. 2025. Think more, hallucinate less: Mitigating hallucinations via dual process of fast and slow thinking.
264
+ Yi Cheng, Wenge Liu, Wenjie Li, Jiashuo Wang, Ruihui Zhao, Bang Liu, Xiaodan Liang, and Yefeng Zheng. 2022. Improving multi-turn emotional support dialogue generation with lookahead strategy planning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3014-3026, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
265
+ Yang Deng, Wenqiang Lei, Wai Lam, and Tat-Seng Chua. 2023a. A survey on proactive dialogue systems: Problems, methods, and prospects. arXiv preprint arXiv:2305.02750.
266
+ Yang Deng, Lizi Liao, Liang Chen, Hongru Wang, Wenqiang Lei, and Tat-Seng Chua. 2023b. Prompting and evaluating large language models for proactive dialogues: Clarification, target-guided, and noncollaboration. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10602-10621, Singapore. Association for Computational Linguistics.
267
+ Yang Deng, Lizi Liao, Zhonghua Zheng, Grace Hui Yang, and Tat-Seng Chua. 2024a. Towards human-centered proactive conversational agents. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 807-818.
268
+ Yang Deng, Wenxuan Zhang, Wai Lam, See-Kiong Ng, and Tat-Seng Chua. 2024b. Plug-and-play policy planner for large language model powered dialogue agents. In The Twelfth International Conference on Learning Representations.
269
+ Ritam Dutt, Sayan Sinha, Rishabh Joshi, Surya Shekhar Chakraborty, Meredith Riggs, Xinru Yan, Haogang Bao, and Carolyn Rose. 2021. ResPer: Computationally modelling resisting strategies in persuasive conversations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 78-90, Online. Association for Computational Linguistics.
270
+ Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. 2023. Improving language model negotiation with self-play and in-context learning from ai feedback. arXiv preprint arXiv:2305.10142.
271
+ Xiaoyang Gao, Siqi Chen, Yan Zheng, and Jianye Hao. 2021. A deep reinforcement learning-based agent for negotiation with multiple communication channels. In 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), pages 868-872.
272
+
273
+ Lewis R. Goldberg. 1992. The development of markers for the big-five factor structure. Psychological Assessment, 4:26-42.
274
+ Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, and et al. 2024. The Llama 3 herd of models.
279
+
280
+ He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2333-2343, Brussels, Belgium. Association for Computational Linguistics.
281
+
282
+ Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Ming Liu, Zerui Chen, and Bing Qin. 2024. Planning like human: A dual-process framework for dialogue planning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4768-4791, Bangkok, Thailand. Association for Computational Linguistics.
285
+ Daniel Kahneman. 2003a. Maps of bounded rationality: Psychology for behavioral economics. American economic review, 93(5):1449-1475.
286
+ Daniel Kahneman. 2003b. Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93(5):1449-1475.
287
+ Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, and Xiang Ren. 2023. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. In Thirty-seventh Conference on Neural Information Processing Systems.
288
+ Kshitij Mishra, Azlaan Mustafa Samad, Palak Totala, and Asif Ekbal. 2022. PEPDS: A polite and empathetic persuasive dialogue system for charity donation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 424-440, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
289
+ OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, and et al. 2024. GPT-4 technical report.
292
+ Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In Thirty-seventh Conference on Neural Information Processing Systems.
293
+ Azlaan Mustafa Samad, Kshitij Mishra, Mauajama Firdaus, and Asif Ekbal. 2022. Empathetic persuasion: Reinforcing empathy and persuasiveness in dialogue systems. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 844-856, Seattle, United States. Association for Computational Linguistics.
296
+
297
+ Weiyan Shi, Yu Li, Saurav Sahay, and Zhou Yu. 2021. Refine and imitate: Reducing repetition and inconsistency in persuasion dialogues via reinforcement learning and human demonstration. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 3478-3492, Punta Cana, Dominican Republic. Association for Computational Linguistics.
298
+
299
+ Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2024. Reflexion: language agents with verbal reinforcement learning. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.
300
+
301
+ Xiaoyu Tian, Liangyu Chen, Na Liu, Yaxuan Liu, Wei Zou, Kaijiang Chen, and Ming Cui. 2023. Duma: a dual-mind conversational agent with fast and slow thinking.
302
+
303
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models.
304
+
305
+ Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5635-5649, Florence, Italy. Association for Computational Linguistics.
306
+
307
+ Changrong Xiao, Wenxing Ma, Qingping Song, Sean Xin Xu, Kunpeng Zhang, Yufang Wang, and Qi Fu. 2024. Human-ai collaborative essay scoring: A dual-process framework with llms.
308
+
309
+ Cheng Yang, Chufan Shi, Siheng Li, Bo Shui, Yujiu Yang, and Wai Lam. 2024. Llm2: Let large language models harness system 2 reasoning.
310
+
311
+ Fu-Hao Yu, Ke-Han Lu, Yi-Wei Wang, Wei-Zhe Chang, Wei-Kai Huang, and Kuan-Yu Chen. 2021. 2020 (a preliminary study of Formosa speech recognition challenge 2020 - Taiwanese ASR). In International Journal of Computational Linguistics & Chinese Language Processing, Volume 26, Number 1, June 2021, Taipei, Taiwan. Association for Computational Linguistics and Chinese Language Processing.
312
+
313
+ Xiao Yu, Maximillian Chen, and Zhou Yu. 2023. Prompt-based Monte-Carlo tree search for goal-oriented dialogue policy planning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7101-7125, Singapore. Association for Computational Linguistics.
314
+
315
+ Haolan Zhan, Yufei Wang, Tao Feng, Yuncheng Hua, Suraj Sharma, Zhuang Li, Lizhen Qu, Zhaleh Semnani Azad, Ingrid Zukerman, and Gholamreza Haffari. 2024. Let's negotiate! a survey of negotiation dialogue systems. arXiv preprint arXiv:2402.01097.
316
+
317
+ Haodi Zhang, Zhichao Zeng, Keting Lu, Kaishun Wu, and Shiqi Zhang. 2022. Efficient dialog policy learning by reasoning with contextual knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):11667-11675.
318
+
319
+ Qiang Zhang, Jason Naradowsky, and Yusuke Miyao. 2023. Ask an expert: Leveraging language models to improve strategic reasoning in goal-oriented dialogue models. In *Findings of the Association for Computational Linguistics: ACL* 2023, pages 6665-6694, Toronto, Canada. Association for Computational Linguistics.
320
+
321
+ Tong Zhang, Chen Huang, Yang Deng, Hongru Liang, Jia Liu, Zujie Wen, Wenqiang Lei, and Tat-Seng Chua. 2024. Strength lies in differences! Improving strategy planning for non-collaborative dialogues via diversified user simulation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 424-444, Miami, Florida, USA. Association for Computational Linguistics.
322
+
323
+ Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.
324
+
325
+ # A Details of MCTS
326
+
327
+ In the process of constructing the strategy-expression preference dataset, we refer to the implementation of GDP-ZERO (Yu et al., 2023). As a supplement to the main body, we describe the implementation details of the MCTS algorithm here. We implement the MCTS algorithm on the two datasets respectively. We treat the current dialogue state at turn $i$ as a sequence of dialogue actions $s_i^{tr} = (a_0,\dots ,a_i)$ and the chosen dialogue strategy as $a^*$. By iteratively executing the four phases of MCTS, we continuously update the relevant variables: $Q$ values, state values $V$, and visit counts $N$. After $K$ iterations, MCTS selects the optimal strategy based on these variables. Specifically:
328
+
329
+ Selection. For the current node $s^{tr}$, this phase aims to choose a dialogue strategy $a$ to reach a child node. To balance exploration and exploitation, we employ the PUCT function to select the dialogue strategy:
332
+
333
+ $$
334
+ \mathrm {P U C T} \left(s ^ {t r}, a\right) = Q \left(s ^ {t r}, a\right) + c _ {p} \frac {\sqrt {\sum_ {a} N \left(s ^ {t r} , a\right)}}{1 + N \left(s ^ {t r} , a\right)}
335
+ $$
336
+
337
+ The algorithm will keep selecting a child node in sequence until a leaf node is reached.
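A small sketch of this selection rule, following the PUCT score above (the names and toy numbers are illustrative):

```python
import math

def puct_select(q, n, c_p=1.0):
    """Pick the strategy maximizing Q(s,a) + c_p * sqrt(sum_a N(s,a)) / (1 + N(s,a))."""
    total_visits = sum(n.values())
    return max(q, key=lambda a: q[a] + c_p * math.sqrt(total_visits) / (1 + n[a]))

# Toy usage: the rarely visited strategy receives a large exploration bonus.
print(puct_select({"logical_appeal": 0.4, "personal_story": 0.3},
                  {"logical_appeal": 8, "personal_story": 1}))  # -> personal_story
```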
338
+
339
+ Expansion. Upon reaching a leaf node, we use the LLM as a policy network and prompt it to generate the next dialogue action distribution. This is achieved by sampling the LLM $m$ times at a temperature of 1.0 and then converting the sampled dialogue acts into a probability distribution. Finally, each strategy is initialized with $Q(s^{tr},\cdot) = Q_0$, a hyperparameter that influences exploration.
340
+
341
+ Evaluation. We determine the value of a state based on the likelihood of its dialogue context leading to task success. In the Persuasion for Good task, the goal is to convince a user to donate to a charity; this can be evaluated by appending the utterance "Would you like to make a donation?" to the context. In the CraigslistBargain task, the goal is to negotiate with the seller to reach a mutually agreeable price, which is evaluated by identifying the negotiated price in the context.
342
+
343
+ Backpropagation. After each search iteration concludes, we perform the following updates for the above variables:
344
+
345
+ $$
346
+ N \left(s ^ {t r}, a\right) \leftarrow N \left(s ^ {t r}, a\right) + 1
347
+ $$
348
+
349
+ $$
350
+ Q \left(s ^ {t r}, a\right) \leftarrow Q \left(s ^ {t r}, a\right) + \Delta Q \left(s ^ {t r}, a\right)
351
+ $$
352
+
353
+ $$
354
+ \Delta Q(s^{tr}, a) = \frac{v(s^{tr}) - Q(s^{tr}, a)}{N(s^{tr}, a)}
355
+ $$
356
+
357
+ After all simulations, we select the optimal strategy for the current state $s^{tr}$ using a formula:
358
+
359
+ $$
360
+ a ^ {*} = \arg \max _ {a} N \left(s _ {0} ^ {t r}, a\right)
361
+ $$
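In code, these updates reduce to an incremental-mean update along the visited (state, action) pairs, followed by picking the most-visited root strategy; the sketch below uses our own naming.

```python
from collections import defaultdict

N = defaultdict(int)    # visit counts N(s, a)
Q = defaultdict(float)  # action values Q(s, a)

def backpropagate(path, value):
    """Apply the update rules above along the visited (state, action) pairs."""
    for s, a in path:
        N[(s, a)] += 1
        Q[(s, a)] += (value - Q[(s, a)]) / N[(s, a)]

def best_strategy(root_state, candidate_actions):
    """After K simulations, return the most-visited strategy at the root."""
    return max(candidate_actions, key=lambda a: N[(root_state, a)])
```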
362
+
363
+ # B More Implementation Details
364
+
365
+ # B.1 Implementation of Baselines
366
+
367
+ We follow the original design of the baseline methods and categorize them into planning-based agents and expression-based agents. To compare the two types of agents with DMNA, we adapt these agents to the applications in our experiments for two tasks:
368
+
369
+ ProCoT (Planning-based): Following Deng et al. (2023b), we instruct the LLMs(e.g., gpt-3.5-turbo-1106) to analyze the current dialogue context, choose the next strategy, and generate a response accordingly.
370
+
371
+ TRIP (Planning-based): Following Zhang et al. (2024), We implement TRIP based on the details provided in the paper, utilizing a user-aware strategic planning module and a population-based training paradigm to enhance the adaptability of dialogue agents to diverse users.
372
+
373
+ GDP-ZERO (Planning-based): Following Yu et al. (2023), we leverage the open-loop MCTS method to enable strategic planning by LLMs. Specifically, we utilize a large language model (e.g., ChatGPT) to serve as the policy prior, value function, user simulator, and system model during the tree search process.
374
+
375
+ DialoGPT (Expression-based): DialoGPT is a widely used model in the field of dialogue systems, known for its strong performance in generating coherent and contextually relevant responses (Zhang et al., 2020). We instruct DialoGPT-large to act as a negotiation agent that chats with the user in the two tasks.
376
+
377
+ ICL-AIF (Expression-based): Following Fu et al. (2023), we prompt GPT3.5 to provide verbal feedback, offering suggestions to the dialogue agent at the end of each interaction. Specifically, our implementation includes presenting three suggestions after each interaction, ensuring that only the most recent 20 suggestions are retained to prevent excessive accumulation.
378
+
379
+ AnE (Expression-based): Following Zhang et al. (2023), we prompt GPT3.5 to act as a negotiation expert by posing M-part questions that guide reasoning about the next response suggestion.
380
+
381
+ # B.2 Implementation of Training
382
+
383
+ We use LLaMA-3-8B-Instruct as our base pre-trained model. The DPO experiment is conducted on a 24 GB GPU (NVIDIA RTX 4090). We use a learning rate of 5e-6 for DPO training with a cosine learning rate scheduler. The number of training epochs is 3. The maximum sequence length of the models is 512. We train the model with a batch size of 1. Following the DPO paper (Rafailov et al., 2023), we set the KL constraint parameter to 0.1. Each sample in DPO is a set of step-level preference data decomposed by MCTS. We set the number of MCTS iterations to $K = 10$ for both tasks.
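For illustration, these hyperparameters map onto a standard DPO fine-tuning setup such as the one provided by the Hugging Face TRL library; the wiring below is our own sketch (not the paper's code), and exact argument names vary across TRL versions.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Turn-level preference pairs produced by MCTS (placeholder content).
train_dataset = Dataset.from_list([
    {"prompt": "<dialogue context>",
     "chosen": "<preferred strategy and expression>",
     "rejected": "<dispreferred strategy and expression>"},
])

args = DPOConfig(
    output_dir="dmna-dpo",
    beta=0.1,                       # KL constraint parameter from the paper
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    per_device_train_batch_size=1,
    max_length=512,
)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     tokenizer=tokenizer)  # newer TRL versions use `processing_class`
trainer.train()
```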
384
+
385
+ # C More Details of Evaluation
386
+
387
+ # C.1 Definitions of quality-based metrics
388
+
389
+ To assess the quality of expression during the negotiation process, we refer to Shi et al. (2021) and Samad et al. (2022) and establish four metrics for evaluation. Each of the four metrics is rated on a five-point scale:
392
+
393
+ Non-repetition (N-Rep): Non-repetition measures the diversity and uniqueness of the expression generated by the negotiation agent. It evaluates whether the agent can produce a variety of responses instead of repeating the same phrases or sentences.
394
+
395
+ Coherence (Coh): Coherence assesses the logical flow and consistency of the agent's responses within the conversation. It ensures that the agent's statements are relevant, connected, and make sense in the context of the ongoing dialogue.
396
+
397
+ Empathy (Emp): Empathy evaluates the agent's ability to understand and share the feelings of the counterpart. It measures how well the agent can respond in a way that demonstrates emotional intelligence and sensitivity.
398
+
399
+ Persuasiveness (Pers): Persuasiveness measures the agent's ability to influence the counterpart's opinions or decisions. It evaluates how effectively the agent can use language and arguments to persuade the counterpart to agree with its proposals or suggestions.
400
+
401
+ # C.2 Analysis of the Adaptability of Different Personas
402
+
403
+ In Section 4.4, we conclude that DMNA demonstrates strong adaptability across different personas. For users with conscientiousness, we provided an analysis in the main body. As a supplement, we offer here an analysis of its adaptability to the other four personality types:
404
+
405
+ Openness: As shown in Figure 5(a) and 5(b), individuals with high openness require a moderate number of reflexions (17.71) to optimize expression. When interacting with openness individuals, DMNA generates expression with higher Coh and Pers but shows slightly weaker Emp. This may be because openness users are more willing to accept new perspectives and strategies, allowing DMNA to adapt with fewer reflexions.
406
+
407
+ Extraversion: Extraversion individuals require a moderate number of reflexions (21.78) to optimize expression. When interacting with extraversion individuals, DMNA generates expression with higher Coh and Pers. Extraversion individuals typically value interaction and energy, and DMNA can meet these needs with an appropriate number of reflexions while maintaining Coh and Pers.
408
+
409
+ Agreeableness: Agreeableness individuals re
410
+
411
+ quire the fewest reflexions (12.57), indicating that DMNA can quickly generate expressions that meet their expectations. Agreeableness individuals focus more on cooperation and empathy, and DMNA can rapidly produce expression with high N-Rep and Coh. Although the value of Pers is average, agreeableness individuals may prioritize cooperation and emotional resonance.
412
+
413
+ Neuroticism: Neuroticism individuals require a relatively high number of reflexions (22.22), indicating that DMNA needs to frequently adjust and optimize expression when interacting with them. Neuroticism individuals experience higher emotional volatility, and DMNA needs multiple reflexions to alleviate their anxiety. However, their expression quality is relatively weak across all dimensions, likely because neuroticism users have lower adaptability to stress and challenges and require more support and reassurance.
414
+
415
+ # D Details of Prompting
416
+
417
+ # D.1 User Simulation
418
+
419
+ Since conversations involving humans are time-consuming, we resort to role-playing to simulate users with LLMs. To make the negotiation scenarios more realistic, we employ a user simulator with comprehensive prompts, endowed with different personas, resistance strategies, and task descriptions. We draw upon the prompts of the user simulator from TRIP (Zhang et al., 2024).
420
+
421
+ Specifically, for persona, we sample one attribute from the Big Five personality and one from Decision-Making Styles, serving as a set of basic persona for the user. In total, we sample 20 sets of personas and ensure the balance of each attribute. Then, we utilize GPT4 to generate 300 specific task descriptions for sampling into the user role-playing prompt. Regarding resistance strategies, we adopt those from Dutt et al. (2021) as user behaviors and integrate them into the user role-playing prompt.
422
+
423
+ The comprehensive prompt encompasses several parts: the background of the task, conversation history, user personality, resistance strategy, and specific prompts used in two tasks, which can be found in Tables 6 and 7.
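As an illustration of how such a role-playing prompt could be assembled, a small sketch is given below; the decision-style labels and the resistance-strategy placeholder are ours, and the actual persona sets and prompt wording follow TRIP (see Tables 6 and 7).

```python
import random

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]
DECISION_STYLES = ["rational", "intuitive", "dependent", "spontaneous"]  # illustrative labels

def sample_persona(rng):
    """Pick one Big Five trait and one decision-making style for the simulated user."""
    return rng.choice(BIG_FIVE), rng.choice(DECISION_STYLES)

def build_user_prompt(task_background, history, persona, resistance_strategy):
    trait, style = persona
    return (f"{task_background}\n"
            f"Conversation so far:\n{history}\n"
            f"You are a user with high {trait} and a {style} decision-making style.\n"
            f"When persuaded too hard, you may resist using the strategy: {resistance_strategy}.")

rng = random.Random(0)
print(build_user_prompt("You are chatting with a persuader about donating to Save the Children.",
                        "Persuader: Hi! Have you heard of Save the Children?",
                        sample_persona(rng), "<resistance strategy from Dutt et al. (2021)>"))
```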
424
+
425
+ # D.2 Assistant Simulation
426
+
427
+ In the intuitive module, we prompt the actor to respond to the current dialogue state through role-playing. The template content includes the background of the task, the conversation history, the dialogue strategies, and previous dialogue experience (included only when the response is regenerated after reflexion). The prompts used in the two tasks can be seen in Tables 8 and 9.
430
+
431
+ # D.3 Details of Strategy
432
+
433
+ Here, we present the negotiation strategies used in the two tasks. In the CB task, there are 11 strategies involved. In the P4G task, 10 strategies are involved. For detailed negotiation strategies and their descriptions, see Tables 4 and 5.
434
+
435
+ # D.4 Details of Component in Multifaceted Reflexion Module
436
+
437
+ This part describes the relevant prompt content used in the multifaceted reflexion module. The multifaceted reflexion module involves roles such as the multi-critics, the moderator, and the actor. The specific roles of these characters are introduced in the main body, and the specific prompt content can be found in Tables 10, 11, and 12.
438
+
439
+ <table><tr><td>Negotiation Strategy</td><td>Explanation</td></tr><tr><td>Greetings</td><td>Please say hello or chat randomly.</td></tr><tr><td>Ask a question</td><td>Please ask any question about product, year, price, usage, etc.</td></tr><tr><td>Answer a question</td><td>Please provide information about the product, year, usage, etc.</td></tr><tr><td>Propose the first price</td><td>Please initiate a price or a price range for the product.</td></tr><tr><td>Propose a counter price</td><td>Please propose a new price or a new price range.</td></tr><tr><td>Use comparatives</td><td>Please propose a vague price by using comparatives with existing price.</td></tr><tr><td>Confirm information</td><td>Please ask a question about the information to be confirmed</td></tr><tr><td>Affirm confirmation</td><td>Please give an affirmative response to a confirm.</td></tr><tr><td>Deny confirmation</td><td>Please give a negative response to a confirm.</td></tr><tr><td>Agree with the proposal</td><td>Please agree with the proposed price.</td></tr><tr><td>Disagree with a proposal</td><td>Please disagree with the proposed price.</td></tr></table>
440
+
441
+ Table 4: The negotiation strategies which DMNA employs in CB.
442
+
443
+ <table><tr><td>Negotiation Strategy</td><td>Explanation</td></tr><tr><td>Logical Appeal</td><td>Please use reasoning and evidence to convince the persuadee.</td></tr><tr><td>Emotion Appeal</td><td>Please elicit specific emotions to influence the persuadee.</td></tr><tr><td>Credibility Appeal</td><td>Please use credentials and cite organizational impacts to establish credibility and earn the user&#x27;s trust. The information usually comes from an objective source (e.g., the organization&#x27;s website or other well-established websites).</td></tr><tr><td>Foot in the Door</td><td>Please use the strategy of starting with small donation requests to facilitate compliance, followed by larger requests.</td></tr><tr><td>Self-Modeling</td><td>Please use the self-modeling strategy, where you first indicate your own intention to donate and choose to act as a role model for the persuadee to follow.</td></tr><tr><td>Personal Story</td><td>Please use narrative exemplars to illustrate someone&#x27;s donation experiences or the beneficiaries&#x27; positive outcomes, which can motivate others to follow these actions.</td></tr><tr><td>Donation Information</td><td>Please provide specific information about the donation task, such as the donation procedure, donation range, etc. By providing detailed action guidance, this strategy can enhance the persuadee&#x27;s self-efficacy and facilitate behavior compliance.</td></tr><tr><td>Source-related Inquiry</td><td>Please ask if the persuadee is aware of the organization (i.e., the source in our specific donation task).</td></tr><tr><td>Task-related Inquiry</td><td>Please ask about the persuadee&#x27;s opinions and expectations related to the task, such as their interest in knowing more about the organization.</td></tr><tr><td>Personal-related Inquiry</td><td>Please ask about the persuadee&#x27;s previous personal experiences relevant to charity donation.</td></tr></table>
444
+
445
+ Table 5: The negotiation strategies which DMNA employs in P4G.
446
+
447
+ # The user simulator prompt for CB
448
+
449
+ Now enter the role-playing mode. In the following conversation, you will play as a seller in a price bargaining game.
450
+
451
+ Your persona: %s. You must follow the instructions below during chat.
452
+
453
+ 1. Your utterances and bargain behavior need to strictly follow your persona. Vary your wording and avoid repeating yourself verbatim!
454
+ 2. You can decide to change your target price flexibly based on your persona and the conversation.
455
+
456
+ Here are some conversation strategies you can follow:
457
+
458
+ 1. "Source Derogation": Attacks the other party or questions the item.
459
+ 2. "Counter Argument": Provides a non-personal argument/factual response to refute a previous claim or to justify a new claim.
460
+ 3. "Personal Choice": Provides a personal reason for disagreeing with the current situation or chooses to agree with the situation provided some specific condition is met.
461
+ 4. "Information Inquiry": Requests for clarification or asks additional information about the item or situation.
462
+ 5. "Self Pity": Provides a reason (meant to elicit sympathy) for disagreeing with the current terms.
463
+ 6. "Hesitance": Stalls for time and is hesitant to commit; specifically, they seek to further the conversation and provide a chance for the other party to make a better offer
464
+ 7. "Self-assertion": Asserts a new claim or refutes a previous claim with an air of finality/ confidence.
465
+ 8. "Others": Do not explicitly foil the negotiation attempts.
466
+
467
+ You are the seller who is trying to sell the %s with the initial price of %s. Product description: %s. Please reply with only one short and succinct sentence.
468
+
469
+ \*\*\*\*\*\*\*\*
470
+
471
+ Conversation History
472
+
473
+ \*\*\*\*\*\*\*\*
474
+
475
+ Table 6: The comprehensive prompt of user simulators in CB.
476
+
477
+ # The user simulator prompt for P4G
478
+
479
+ Now enter the role-playing mode. In the following conversation, you will play as a Persuadee in a persuasion game.
480
+
481
+ Your persona: %s. You must follow the instructions below during chat.
482
+
483
+ 1. Your utterances need to strictly follow your persona. Vary your wording and avoid repeating yourself verbatim!
484
+ 2. Pretend you have little knowledge about the Save the Children charity. You have little willingness to donate at the beginning of the conversation.
485
+ 3. Your willingness for donation depends on your persona and how influenced you are by the Persuader.
486
+ 4. You need to determine whether to donate money or not. If you think you are convinced by the Persuader, you should donate money to the charity.
487
+
488
+ # Your Response Strategy:
489
+
490
+ 1. "Donate": show your willingness to donate.
491
+ 2. "Source Derogation": attacks or doubts the organisation's credibility.
492
+ 3. "Counter Argument": argues that the responsibility is not on them or refutes a previous statement.
493
+ 4. "Personal Choice": Attempts to save face by asserting their personal preference such as their choice of charity and their choice of donation.
494
+ 5. "Information Inquiry": Ask for factual information about the organisation for clarification or as an attempt to stall.
495
+ 6. "Self Pity": Provides a self-centred reason for not being willing to donate at the moment.
496
+ 7. "Hesitance": Attempts to stall the conversation by either stating they would donate later or is currently unsure about donating.
497
+ 8. "Self-assertion": Explicitly refuses to donate without even providing a personal reason.
498
+ 9. "Others": Do not explicitly foil the persuasion attempts.
499
+
500
+ You are the Persuadee who is being persuaded by a Persuader. Please reply with only one short and succinct sentence.
501
+
502
+ \*\*\*\*\*\*\*\*
503
+
504
+ Conversation History
505
+
506
+ \*\*\*\*\*\*\*\*
507
+
508
+ Table 7: The comprehensive prompt of user simulators in P4G.
509
+
510
+ # The prompt of DMNA actor in CB
511
+
512
+ Now enter the role-playing mode. In the following conversation, you will play as a buyer in a price bargaining game. You should choose among the following actions during a conversation to respond to the Seller: %s. Please chat with the Seller using short and natural sentences.
513
+
514
+ You are the buyer who is trying to buy the %s with the price of %s. Product description: %s. Choose a strategy from the list provided and provide a corresponding response based on the Conversation Context.
515
+
516
+ Please reply with only one short and succinct sentence.
517
+
518
+ Conversation Context:%s.
519
+
520
+ (If the Actor responds after multifaceted reflexion: In the context of the conversation above, here are some experiences with poor expression quality, along with the corresponding reflections: %s.) Please output an appropriate and high-quality response and format your answer as '[strategy]response'. Your Response:
521
+
522
+ Table 8: The prompt design of the DMNA actor for CB.
523
+
524
+ # The prompt of DMNA actor in P4G
525
+
526
+ Now enter the role-playing mode. In the following conversation, you will play as a Persuader who is trying to persuade the Persuadee to donate to the charity called Save the Children. Save the Children is headquartered in London, and they work to help fight poverty around the world. Children need help in developing countries and war zones. Small donations like \$1 or \$2 go a long way to help. You should choose among the following actions during a conversation to respond to the Persuadee: %s. Please chat with the Persuadee using short and natural sentences.
527
+
528
+ As the Persuader, in order to persuade the Persuadee to donate to a charity called Save the Children, choose a strategy from the list provided and provide a corresponding response based on the Conversation Context. Please chat with the Persuadee using short and natural sentences.
529
+
530
+ Conversation Context:%s. (If the Actor responds after multifaceted reflexion: In the context of the conversation above, here are some experiences with poor expression quality, along with the corresponding reflections: %s.) Please output an appropriate and high-quality response and format your answer as '[strategy]response'. Your Response:
531
+
532
+ Table 9: The prompt design of the DMNA actor for P4G.
533
+
534
+ # The prompt of DMNA Multi-Critics in CB
535
+
536
+ You are the critic for a conversation. Your task is to perform a fine-grained analysis of the Buyer's latest response in the current communication. Determine whether the Buyer's latest response can positively influence the progression of future bargaining efforts to negotiate down the Seller's price.
537
+
538
+ You can consider the following example aspects in your analysis:
539
+
540
+ 1. Whether the Buyer maintains a polite and respectful tone throughout the conversation, even when disagreements arise.
541
+ 2. Does the given price conform to bargaining logic? The Buyer's offered price should gradually increase in order to reach an agreement with the Seller.
542
+ 3. Whether the Buyer offers different angles or reasons for their request, rather than repeating the same point.
543
+
544
+ Please format your response as: Answer: Yes or No. (If No) Suggestion: your concrete suggestion. The following is the conversation: %s.
545
+
546
+ Question: Does the Buyer's latest response positively influence the progression of future bargain?
547
+
548
+ Table 10: The prompt design of the Multi-Critics for CB.
549
+
550
+ # The prompt of DMNA Multi-Critics in P4G
551
+
552
+ You are the critic for a conversation. Your task is to perform a fine-grained analysis of the Persuader's current communication. Determine whether this expression can positively influence the progression of future persuasion efforts to persuade the Persuadee to donate.
553
+
554
+ You can consider the following example aspects in your analysis:
555
+
556
+ 1. Whether the Persuader addresses the Persuadee's expressed needs and concerns.
557
+ 2. Whether the Persuader lacks empathy and trust with the Persuadee.
558
+ 3. Whether the Persuader keeps the communication open and respectful.
559
+ 4. Whether the Persuader's last response is similar to the previous turn, lacking initiative and richness.
560
+
561
+ Please format your response as: Answer: Yes or No. (If No) Suggestion: your concrete suggestion.
562
+
563
+ The following is the conversation: %s.
564
+
565
+ Question: Does the Persuader's latest response positively influence the progression of future persuasion?
566
+
567
+ Table 11: The prompt design of the Multi-Critics for P4G.
568
+
569
+ # The prompt of DMNA Moderator
570
+
571
+ You are a reflection craft proposer. Your task is to summarize the ideas that have been presented into a draft designed to satisfy the maximum number of agents. Below are the ideas from num agents: reflections. Draft of reflection:
572
+
573
+ Table 12: The prompt design of the Moderator.
2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f488615dc7a54bfc459f38ea48c198cef308a161c81ffb64449dd484cff5339b
3
+ size 763835
2025/A Dual-Mind Framework for Strategic and Expressive Negotiation Agent/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/42bedb8b-dcb2-403e-921a-8f9f3747a4f4_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/42bedb8b-dcb2-403e-921a-8f9f3747a4f4_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/42bedb8b-dcb2-403e-921a-8f9f3747a4f4_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6b5de9acf22117ff4e1e4252131d3f0555645b2c73e357f20450aa7a0203d338
3
+ size 611338
2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5ff2c3f36a9de3b5a567844b3a43b41253c29e5e550d669f31733c5631757b29
3
+ size 2102580
2025/A Dual-Perspective NLG Meta-Evaluation Framework with Automatic Benchmark and Better Interpretability/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/e5ae8e91-b92f-4f53-8664-78e078ceb4ef_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/e5ae8e91-b92f-4f53-8664-78e078ceb4ef_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/e5ae8e91-b92f-4f53-8664-78e078ceb4ef_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b3295cdb6f8972d89f640f893d620d51a18e3336332c15dac4d2d6cce57ac7e
3
+ size 2184323
2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/full.md ADDED
@@ -0,0 +1,482 @@
1
+ # A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning
2
+
3
+ Zhiyu Zhang $^{1,3}$ , Wei Chen $^{2*}$ , Youfang Lin $^{1,3}$ , Huaiyu Wan $^{1,3}$
4
+
5
+ $^{1}$ School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
6
+
7
+ <sup>2</sup>Guilin University of Electronic Technology,
8
+
9
+ School of Computer Science and Information Security, Guangxi, China
10
+
11
+ <sup>3</sup>Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing, China
12
+
13
+ {zyuzhang, yflin, hywan}@bjtu.edu.cn, w_chen@guet.edu.cn
14
+
15
+ # Abstract
16
+
17
+ Recent Continual Learning (CL)-based Temporal Knowledge Graph Reasoning (TKGR) methods focus on significantly reducing computational cost and mitigating catastrophic forgetting caused by fine-tuning models with new data. However, existing CL-based TKGR methods still face two key limitations: (1) They usually one-sidedly reorganize individual historical facts, while overlooking the historical context essential for accurately understanding the historical semantics of these facts; (2) They preserve historical knowledge by simply replaying historical facts, while ignoring the potential conflicts between historical and emerging facts. In this paper, we propose a Deep Generative Adaptive Replay (DGAR) method, which can generate and adaptively replay historical entity distribution representations from the whole historical context. To address the first challenge, historical context prompts as sampling units are built to preserve the whole historical context information. To overcome the second challenge, a pre-trained diffusion model is adopted to generate the historical distribution. During the generation process, the common features between the historical and current distributions are enhanced under the guidance of the TKGR model. In addition, a layer-by-layer adaptive replay mechanism is designed to effectively integrate historical and current distributions. Experimental results demonstrate that DGAR significantly outperforms baselines in reasoning and mitigating forgetting.
18
+
19
+ # 1 Introduction
20
+
21
+ Temporal Knowledge Graphs (TKGs) extend traditional Knowledge Graphs (KGs) by associating triples with timestamps (Leblay and Chekol, 2018; Dasgupta et al., 2018; Lacroix et al., 2020), providing dynamic and structured time-sensitive knowledge for various downstream applications (Chen et al., 2023; Gutiérrez et al., 2024; Wang et al., 2024; Zhao et al., 2025), such as Large Language Model reasoning, event prediction, and financial forecasting (Guan et al., 2022).
22
+
23
+ Unfortunately, TKGs often suffer from incompleteness, hindering the capability of dynamic knowledge representation in downstream applications. Temporal Knowledge Graph Reasoning (TKGR) is proposed to address this issue by inferring missing temporal facts based on historical knowledge.
24
+
25
+ In real-world scenarios, TKGs are continuously updated with unseen entities, relations, and new facts. Existing TKGR studies (Leblay and Chekol, 2018; Li et al., 2021, 2022b; Xu et al., 2023a) update model parameters by retraining on the entire TKG when new data arrives. This process is computationally expensive and impractical for dynamic settings, especially in the transportation and finance domains where frequent knowledge updates are required (Liu et al., 2024a). Continual Learning (CL) by simply fine-tuning models with new data may seem intuitive, but it often results in catastrophic forgetting, where prior knowledge is lost (Mirtaheri et al., 2023).
26
+
27
+ To mitigate catastrophic forgetting, recent CL-based TKGR studies (Wu et al., 2021; Mirtaheri et al., 2023) rely on the mechanisms of replaying prior knowledge, further employing regularization techniques to preserve old knowledge. These studies integrate new knowledge while preserving previously acquired information, thus enabling reasoning over both historical and emerging data.
28
+
29
+ Despite notable progress, current CL-based TKGR methods still face two primary challenges:
30
+
31
+ (1) These methods often reorganize and replay the historical data (e.g., based on frequency or clustering) to mitigate catastrophic forgetting. However, such methods solely focus on the statistical properties of individual historical events and thus fail to correctly understand the historical semantics of these facts together with the necessary historical context.
32
+
33
+ ![](images/abe49e5eaaaccf321d8473d8744f1499cd68b4cc10baafb27f2e5b17a15b87c3.jpg)
34
+ Figure 1: The distributions of the same set of entity features at different timestamps are visualized using U-MAP visualization. Entities involved in all facts at a randomly selected time $i$ are extracted. Stage 1 represents the feature distribution of these entities at time $i$ , while stage 2, 3, and 4 respectively correspond to their feature distributions at times $j$ , $m$ and $n$ , where $i < j < m < n$ . (a) depicts the distributions learned by the base model across these timestamps, and (b) shows the distributions learned by DGAR. The results demonstrate that our approach effectively resolves distribution conflicts while preserving historical knowledge.
35
+
36
+ Besides, this fragmented approach makes it difficult to capture the overarching trends of entity behavior, thus limiting the model's capacity in complex reasoning tasks.
37
+
38
+ (2) Current approaches typically replay historical data directly overlooking potential conflicts between the distributions of historical and current data (e.g., Figure 1). As entities associate with different neighbors over time, semantic differences arise, which in turn cause conflicts in the distributions of entities at different times. This oversight hinders the effectiveness of mitigating catastrophic forgetting.
39
+
40
+ To address these challenges, we propose a Deep Generative Adaptive Replay (DGAR) method for TKGR, which can continually and adaptively replay historical information by generating the historical distribution representation of entities from the whole historical context. For the first challenge, instead of using individual facts as sampling units, we build Historical Context Prompts (HCPs) as sampling units to retain the context information of historical data. For the second challenge, we enhance the common features across different distributions and introduce a deep adaptive replay mechanism to mitigate distribution conflicts. Specifically, we design a Diffusion-Enhanced Historical Distribution Generation (Diff-HDG) strategy that generates entity historical distribution representations. During the generation process, the features of the entity's historical distribution that are common to
41
+
42
+ the entity's current distribution are enhanced. In addition, a layer-by-layer Deep Adaptive Replay (DAR) mechanism is introduced to inject the entity's historical distribution representation into its current distribution representation.
43
+
44
+ In summary, the main contributions of this work are as follows:
45
+
46
+ - We propose a novel Generative Adaptive Replay Continual Learning method for TKGR, which effectively addresses the issue of knowledge forgetting by incorporating the entire historical context and mitigating distribution conflicts.
47
+ - A sophisticated historical context prompt is designed for replay data sampling, ensuring the semantic integrity of the historical context information in the sampled facts.
48
+ - A Diff-HDG strategy is proposed to generate historical distribution representations by enhancing the common features. In addition, a DAR mechanism is designed to efficiently integrate historical and current distributions.
49
+ - Extensive experiments conducted on widely used TKGR datasets demonstrate the superiority of our approach, consistently outperforming all baseline methods across various metrics.
50
+
51
+ # 2 Related Work
52
+
53
+ # 2.1 Reasoning on TKGs
54
+
55
+ TKGR aims to infer missing facts by utilizing known facts. Recent advancements in this field fall into four main approaches. The distribution-based approaches (Leblay and Chekol, 2018; Lacroix et al., 2020) perform reasoning by training a scoring function that can evaluate the distance or semantic similarity between entities. The Graph Neural Network (GNN)-based methods (Li et al., 2021, 2022c; Xu et al., 2023b; Chen et al., 2024b; Wu et al., 2023; Chen et al., 2024c) capture structural and temporal patterns in graph sequence to enhance reasoning accuracy. Rule-based temporal knowledge graph reasoning methods (Huang et al., 2024; Chen et al., 2024a) follow a symbolic paradigm that emphasizes interpretability, logical consistency, and low resource requirements. These methods typically mine temporal logical rules from historical fact sequences, then use these rules to
56
+
57
+ infer future events or fill in missing historical facts. When new data arrives, these methods often require retraining. Given the strong performance of GNN-based methods in TKGR, our approach builds upon this category of methods.
58
+
59
+ # 2.2 CL for Knowledge Graphs
60
+
61
+ Compared to existing approaches that necessitate repeated retraining, CL adaptively incorporates sequentially evolving knowledge. Recently, several methods have applied CL to knowledge graph embedding (KGE) and TKGR. For instance, some approaches (Wu et al., 2021; Mirtaheri et al., 2023) integrate experience replay with regularization techniques to address catastrophic forgetting in TKGR. TIE's (Wu et al., 2021) overly restrictive regularization leads to a decline in overall performance. The regularization method restricts the applicability of DEWC (Mirtaheri et al., 2023) to a limited number of tasks. (Cui et al., 2023; Liu et al., 2024a,b) apply CL to KGE by employing regularization constraints to retain historical knowledge, effectively mitigating catastrophic forgetting.
62
+
63
+ # 2.3 Diffusion Models
64
+
65
+ Diffusion models are generative frameworks that reconstruct structured data from Gaussian noise through a stepwise reverse denoising process (Sohl-Dickstein et al., 2015; Ho et al., 2020). In continuous domains like image synthesis, DDPM and its variants effectively model complex distributions and generate high-quality outputs (Ho et al., 2020; Rombach et al., 2022). Applying diffusion models to discrete domains is challenging due to Gaussian noise's incompatibility with discrete structures. Text generation employs polynomial diffusion or continuous-to-discrete mapping to link continuous processes with discrete data (Austin et al., 2021; Gong et al., 2022; Li et al., 2022a). In KGs, (Long et al., 2024; Cai et al., 2024) restore knowledge from noise by mapping discrete KG data to a continuous space and applying conditional constraints.
66
+
67
+ # 3 Preliminaries
68
+
69
+ # 3.1 The Task of TKGR
70
+
71
+ TKG can be represented as a sequence of snapshots partitioned by time, denoted as $\mathcal{G} = \{G_1,G_2,G_3,\dots,G_T\}$ . Each snapshot $G_{t} = (\mathcal{V},\mathcal{R},\mathcal{F}_{t})$ is a directed multi-relational graph at timestamp $t$ . $(s,r,o,t)\in \mathcal{F}_t$ is denoted as a fact, where $s\in \mathcal{V}$ and $o\in \mathcal{V}$ are subject entity and object entity, respectively, $r\in \mathcal{R}$ is denoted as
72
+
73
+ a relation that connects the subject entity and the object entity.
74
+
75
+ The task of TKGR aims to predict the missing object entity (or subject entity) given a query $(s_q, r_q,?, t_q)$ or $(?, r_q, o_q, t_q)$ . To be consistent with common representation, the inverse quadruple of a fact $(s, r, o, t)$ is $(o, r^{-1}, s, t)$ which is added to the dataset. The TKG reasoning goal can be expressed as the prediction of object entities.
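+
+ As a concrete illustration of this convention (a minimal sketch; entity and relation IDs are arbitrary), the inverse quadruple can be added by shifting the relation ID by the number of relations:
+
+ ```python
+ # For every fact (s, r, o, t), also add (o, r^-1, s, t), so that subject
+ # prediction reduces to object prediction; r^-1 is encoded here as r + |R|.
+ def add_inverse_quadruples(facts, num_relations):
+     augmented = []
+     for s, r, o, t in facts:
+         augmented.append((s, r, o, t))
+         augmented.append((o, r + num_relations, s, t))
+     return augmented
+
+ facts = [(0, 2, 5, 7)]  # (subject, relation, object, timestamp)
+ print(add_inverse_quadruples(facts, num_relations=10))
+ # [(0, 2, 5, 7), (5, 12, 0, 7)]
+ ```
+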
76
+
77
+ # 3.2 Continual Learning for TKGR
78
+
79
+ Under the CL setting, TKGs can be viewed as a sequence of KG snapshots arriving as a stream over time. A set of tasks can be denoted as $\{\mathcal{T}_1,\mathcal{T}_2,\dots,\mathcal{T}_T\}$ , where each task is denoted as $\mathcal{T}_t = (D_{train}^t,D_{valid}^t,D_{test}^t)$ , where $G_{t} = [D_{train}^{t}:D_{valid}^{t}:D_{test}^{t}]$ . The model parameters are updated sequentially for each task as the task stream $\{\mathcal{T}_1,\mathcal{T}_2,\dots,\mathcal{T}_T\}$ arrives. The trained model parameters at each step can be represented as $\{\theta_1,\theta_2,\dots,\theta_T\}$ . At time $t$ , the parameters $\theta_t$ are initialized by the parameters $\theta_{t - 1}$ at the previous time. Then the model is trained on $D_{train}^{t}$ to update the parameters.
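+
+ The protocol can be summarized by the following schematic loop (the split ratios and the `train_one_task` callable are placeholders, not the actual implementation):
+
+ ```python
+ def continual_training(snapshots, init_params, train_one_task):
+     """Fit one task per snapshot; theta_t is warm-started from theta_{t-1}."""
+     params_history = []
+     params = init_params
+     for G_t in snapshots:
+         n = len(G_t)
+         task = {                      # G_t = [D_train : D_valid : D_test]
+             "train": G_t[: int(0.8 * n)],
+             "valid": G_t[int(0.8 * n): int(0.9 * n)],
+             "test":  G_t[int(0.9 * n):],
+         }
+         params = train_one_task(params, task["train"])
+         params_history.append(params)
+     return params_history
+
+ # toy usage: "parameters" are a counter and training just adds the task size
+ print(continual_training([list(range(10)), list(range(10, 20))], 0,
+                           lambda p, d: p + len(d)))   # -> [8, 16]
+ ```
+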
80
+
81
+ During CL for TKGR, we mitigate catastrophic forgetting based on KG snapshot sequence reasoning models, such as RE-GCN (Li et al., 2021). We focus primarily on entity representations, as the semantics of entities tend to evolve more frequently over time, in contrast to the relatively negligible changes in the semantics of relations (Goel et al., 2020).
82
+
83
+ # 3.3 Denoising Diffusion Probabilistic Model
84
+
85
+ The Diffusion Model (DM) consists of a forward diffusion process and a reverse diffusion process. In the forward process, a continuous DM is adapted to handle discrete facts $\mathcal{G}$ . Given discrete data $x$ , we first project $x$ into a continuous embedding, denoted as $X_0 = \operatorname{Embedding}(x)$ , where $X_0 \in \mathbb{R}^d$ . Embedding $(\cdot)$ is a function that can map a word to a vector in $\mathbb{R}^d$ . Then a Markov chain of latent variables $X_1, X_2, \ldots, X_n$ is generated in the forward process by gradually adding small amounts of standard Gaussian noise to the sample. This process can be obtained by:
86
+
87
+ $$
88
+ q \left(X _ {n} \mid X _ {n - 1}\right) = \mathcal {N} \left(X _ {n}; \sqrt {1 - \beta_ {n}} X _ {n - 1}, \beta_ {n} I\right), \tag {1}
89
+ $$
90
+
91
+ where $\beta_{n}, n \in [1, \dots, N]$ is a noise schedule used to control the step size of the added noise and $I$ is an identity matrix. $\mathcal{N}$ is the Gaussian distribution.
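+
+ For concreteness, one forward step of Eq. (1) can be sampled as follows (a minimal NumPy sketch with an assumed linear noise schedule, not the training code):
+
+ ```python
+ import numpy as np
+
+ def forward_diffusion_step(x_prev, beta_n, rng):
+     """Sample X_n ~ N(sqrt(1 - beta_n) * X_{n-1}, beta_n * I), as in Eq. (1)."""
+     noise = rng.standard_normal(x_prev.shape)
+     return np.sqrt(1.0 - beta_n) * x_prev + np.sqrt(beta_n) * noise
+
+ rng = np.random.default_rng(0)
+ x = rng.standard_normal(16)              # X_0 = Embedding(x), with d = 16
+ betas = np.linspace(1e-4, 0.02, 1000)    # assumed linear schedule for beta_n
+ for beta in betas:
+     x = forward_diffusion_step(x, beta, rng)
+ print(np.round(x[:4], 3))                # after N steps, X_N is close to standard Gaussian noise
+ ```
+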
92
+
93
+ In the reverse process, the standard Gaussian representation $X_{t}$ progressively approximates the
94
+
95
+ ![](images/45905fcd94cb36b641793e83993cb3d8e4f7ccfaf45424231852dd07835c2250.jpg)
96
+ Figure 2: The overall architecture diagram of DGAR. Following the CL paradigm, each snapshot of the TKGs is treated as a separate task.
97
+
98
+ true representation $X_0$ by iterative denoising. It can be learned by a parameterized model:
99
+
100
+ $$
101
+ p _ {\phi} \left(\mathrm {X} _ {n - 1} \mid X _ {n}, n\right) = \mathcal {N} \left(X _ {n - 1}; \mu_ {\phi} \left(\mathrm {X} _ {n}, n\right), \Sigma_ {\phi} \left(\mathrm {X} _ {n}, n\right)\right), \tag {2}
102
+ $$
103
+
104
+ where $\mu_{\phi}$ and $\Sigma_{\phi}$ are generally implemented by a deep neural network $f_{\phi}(\cdot)$ , such as a Transformer or U-Net. Inspired by the success of Transformer encoders in the field of graph data (Hu et al., 2020), we opt for the Transformer architecture in this work. The pretraining objective is defined as follows:
105
+
106
+ $$
107
+ \mathcal {L} = \mathbb {E} _ {q} \left[ \sum_ {n = 2} ^ {N} \| X _ {0} - f _ {\phi} (X _ {n}, n) \| ^ {2} \right] - \log p _ {\phi} (x \mid X _ {0}), \tag {3}
108
+ $$
109
+
110
+ where $\mathbb{E}_q$ is the expectation over joint distribution. It is important to emphasize that the data employed in the testing phase remains entirely unseen during the pretraining process.
111
+
112
+ # 4 The DGAR Method
113
+
114
+ The overall architecture of DGAR is shown in Figure 2, primarily consisting of three parts: Historical Context Prompt Building, Diffusion-Enhanced Historical Distribution Generation, and Deep Adaptive Replay.
115
+
116
+ Initially, when a new query arrives for the $t$ -th task, DGAR builds HCPs based on the queried entity (Section 4.1). Based on the obtained HCPs, we adopt the latest TKGR model parameters $\theta_{t-1}$ to guide the historical distribution representation generation of entities (Section 4.2). To support the following reasoning, DAR injects generated historical entity distribution into the current entity distribution representation (Section 4.3). Finally, we present the final loss function of DGAR (Section 4.4).
117
+
118
+ # 4.1 Historical Context Prompt Building
119
+
120
+ Historical context prompts, serving as sampling units of replay data, aim to accurately preserve entities' complete historical semantics. Before constructing an HCP, it is critical to determine which entities at time $t$ are most relevant to achieving the ultimate goal of mitigating catastrophic forgetting. As a new query $(e_q, r_q, ?, t)$ arrives, $e_q$ is an involved entity and its semantics will be directly influenced after fine-tuning at the current timestamp $t$ (Zhang et al., 2023a).
121
+
122
+ To correctly reflect the historical context semantics of $e_{q}$ , we construct an HCP for entity $e_{q}$ . If $e_{q}$ appears at time $t - 1$ or earlier, it might be associated with one or more entities in the past. The historical distribution of $e_{q}$ is determined by the entities historically associated with $e_{q}$ (Xing et al., 2024). The HCP building for $e_{q}$ can be formalized as follows:
123
+
124
+ $$
125
+ Prompt_{\text{replay}}^{i} = \left\{ (s, r, e_{q}) \mid (s, r, e_{q}) \in G_{i} \ \text{or} \ (e_{q}, r^{-1}, s) \in G_{i},\ G_{i} \in \mathcal{G} \right\}, \tag{4}
126
+ $$
127
+
128
+
129
+
130
+
131
+
132
+ where $Prompt_{\mathrm{replay}}^i$ is the HCP of entity $e_q$ at time $i$ , which denotes the set of triples associated with $e_q$ at historical moment $i$ . The triples in $Prompt_{\mathrm{replay}}^i$ consist of the entity $e_q$ , the neighbor $s$ associated with $e_q$ at time $i$ , and the relation $r$ between $e_q$ and $s$ at time $i$ . When no triple containing $e_q$ appears at time $i$ , $Prompt_{\mathrm{replay}}^i$ is empty.
133
+
134
+ To reduce the computational and storage burden, we do not select the HCP of $e_{q}$ across its entire history. Instead, we treat a HCP as the sampling unit, and randomly select HCPs of $k$ distinct time to enhance the generalizability of the replay data. The discussion about $k$ is provided in Appendix B.4. The set of HCP after sampling is denoted
135
+
136
+ as $Prompt_{\mathrm{replay}}$ , which serves as the prompt for generating entity historical distributions in Section 4.2. The entities involved in $Prompt_{\mathrm{replay}}$ are represented as a set $V_{\mathrm{replay}}$ ; these entities are directly or indirectly influenced by newly arrived data:
137
+
138
+ $$
139
+ V_{\text{replay}} = \left\{ e \mid \forall (e, r, o) \in Prompt_{\text{replay}} \right\} \cup \left\{ e \mid \forall (s, r, e) \in Prompt_{\text{replay}} \right\}. \tag{5}
140
+ $$
141
+
142
+
143
+
144
+
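+
+ A schematic of Eqs. (4) and (5) is sketched below; the data structures and helper names are assumptions used only for illustration:
+
+ ```python
+ import random
+
+ def build_hcp(snapshots, e_q):
+     """Prompt_replay^i: the triples involving e_q in snapshot G_i (cf. Eq. 4)."""
+     hcp = {}
+     for i, G_i in enumerate(snapshots):
+         triples = [(s, r, o) for (s, r, o) in G_i if s == e_q or o == e_q]
+         if triples:                       # empty HCPs are simply omitted
+             hcp[i] = triples
+     return hcp
+
+ def sample_hcp(hcp, k, rng):
+     """Keep the HCPs of k randomly chosen timestamps; collect V_replay (cf. Eq. 5)."""
+     times = rng.sample(sorted(hcp), min(k, len(hcp)))
+     prompt_replay = {i: hcp[i] for i in times}
+     v_replay = {e for triples in prompt_replay.values()
+                 for (s, _, o) in triples for e in (s, o)}
+     return prompt_replay, v_replay
+
+ snapshots = [[(1, 0, 2), (3, 1, 2)], [(2, 0, 4)], [(5, 1, 6)]]
+ print(sample_hcp(build_hcp(snapshots, e_q=2), k=2, rng=random.Random(0)))
+ ```
+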
145
+
146
+ # 4.2 Diffusion-enhanced Historical Distribution Generation
147
+
148
+ The target of Diff-HDG strategy is to generate the historical distribution of entity with minimal conflicts against the current distribution of entity. For this purpose, during the generation process, common features between the historical and current distributions need to be enhanced. In addition, features in the historical distribution of entity that differ from the current distribution of entity need to be weakened. Motivated by previous work (Yang et al., 2023; Voynov et al., 2023; Zhang et al., 2023b), pre-trained DMs have demonstrated exceptional capabilities in reproducing knowledge from prompt texts. DMs possess a robust ability to generate generalized expressions. This capability is crucial for resolving conflicts that arise between different distributions. Thus, we generate historical distribution representations of entities through a pre-trained DM based on HCP. The generation of entity historical distribution primarily relies on the inverse diffusion process, which can be outlined as follows:
149
+
150
+ $$
151
+ H _ {e} ^ {\text {r e p l a y}} = p _ {\phi} \left(X _ {n}, f _ {\theta_ {t}}, \text {P r o m p t} _ {\text {r e p l a y}}\right), \tag {6}
152
+ $$
153
+
154
+ where $X_{n}$ denotes the object to be denoised, which is processed iteratively to yield the historical distribution representation of the entity, $H_{e}^{\mathrm{replay}}$ . The function $f_{\theta_t}$ represents the parameters of the TKGR model at the current time $t$ .
155
+
156
+ In detail, for a fact $(s,r,e_q)\in Prompt_{replay}$ , we treat the entity $s$ and the relation $r$ as generation conditions. This condition-based generation method integrates information from historical neighbors and relations, enabling a more precise modeling of the historical distribution of entity:
157
+
158
+ $$
159
+ X _ {n} = \operatorname {C o n d i t i o n} \left(S _ {0}, R _ {0}, Z\right), Z \sim \mathcal {N} (\mathbf {0}, I), \tag {7}
160
+ $$
161
+
162
+ where $S_0 = \text{Embedding}(s)$ , $R_0 = \text{Embedding}(r)$ , and Condition $(\cdot)$ represents concatenation. The condition in $X_n$ can guide the DM to generate distributions that reflect the historical semantics of entities.
163
+
164
+ To enhance the common features between historical and current distribution representations, we propose a novel method to guide the generation process of historical entity representations:
165
+
166
+ $$
167
+ X _ {n - 1} = p _ {\phi} \left(X _ {n}\right), \tag {8}
168
+ $$
169
+
170
+ $$
171
+ X _ {n - 1} = X _ {n - 1} + \gamma \frac {\partial \sigma}{\partial X _ {n - 1}}, \tag {9}
172
+ $$
173
+
174
+ $$
175
+ \frac {\partial \sigma}{\partial X _ {n - 1}} = \nabla_ {X _ {n - 1}} \sigma \left(f _ {\theta_ {t}} \left(X _ {n - 1}, (s, r, e _ {q})\right)\right), \tag {10}
176
+ $$
177
+
178
+ where $\sigma$ denotes the softmax function, $\gamma$ is a hyperparameter, and $X_{n - 1}$ is the result of the first denoising step performed on $X_{n}$ . This process produces a cleaner representation of $X_{n}$ . After acquiring $X_{n - 1}$ , we evaluate the scores of the historical facts in $Prompt_{\mathrm{replay}}$ with the current TKGR model $f_{\theta_t}$ . The gradient of these scores is applied to optimize the generated historical distribution, ensuring that the scores of these historical facts are maximized at the current time. Based on our empirical observations, adjacent timestamps in TKGs show only minor distribution differences. Since the model parameters $\theta_t$ for the current time can only be obtained after being updated at the current time, we approximate $\theta_t$ using $\theta_{t - 1}$ .
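+
+ A minimal PyTorch-style sketch of this guidance step is given below; the denoiser, the TKGR scorer, and all shapes are stand-ins chosen for illustration:
+
+ ```python
+ import torch
+
+ def guided_denoise_step(x_n, denoiser, score_fn, gamma):
+     """One step of Eqs. (8)-(10): denoise, then ascend the softmax score gradient."""
+     x = denoiser(x_n).detach().requires_grad_(True)       # X_{n-1} = p_phi(X_n)
+     logits = score_fn(x)                                   # scores over candidate entities
+     fact_prob = torch.softmax(logits, dim=-1)[..., 0]      # index 0 plays the historical fact
+     grad = torch.autograd.grad(fact_prob.sum(), x)[0]      # d sigma / d X_{n-1}
+     return (x + gamma * grad).detach()                     # Eq. (9)
+
+ d = 8
+ denoiser = torch.nn.Linear(d, d)      # stand-in for the pre-trained diffusion denoiser
+ score_fn = torch.nn.Linear(d, 5)      # stand-in for the TKGR scoring function f_theta
+ x_prev = guided_denoise_step(torch.randn(2, d), denoiser, score_fn, gamma=0.1)
+ print(x_prev.shape)                   # torch.Size([2, 8])
+ ```
+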
179
+
180
+ After $n$ iterations of denoising with $p_{\phi}$ , we obtain the generated representation $X_0^{e_q}$ for the query entity. Similarly, the historical neighboring entity $s$ receives an updated representation $X_0^s$ , influenced by the query entity $X_0^{e_q}$ . Mean pooling is used to aggregate information from multiple neighbors across different timestamps, as shown below:
181
+
182
+ $$
183
+ H _ {e} ^ {\text {r e p l a y}} = \frac {\sum_ {i = 1} ^ {k} \sum_ {\epsilon \in M _ {e} ^ {i}} H _ {\epsilon} ^ {i}}{\sum_ {i = 1} ^ {k} | M _ {e} ^ {i} |}, \tag {11}
184
+ $$
185
+
186
+ where $H_{e}^{\mathrm{replay}}$ represents the final historical distribution representation of entity $e \in V_{\mathrm{replay}}$ , capturing its historical characteristics in the TKGs. $H_{\epsilon}^{i}$ represents the entity representation $X_0^e$ generated from the facts $\epsilon$ . $M_{e}^{i}$ refers to the set of facts that contain entity $e$ at the $i$ -th time slice. After iterative denoising, the features in $H_{e}^{\mathrm{replay}}$ that are the same as the current distribution are enhanced, and the features in $H_{e}^{\mathrm{replay}}$ that are different from the current distribution are weakened. These historical entity representations are generated in parallel to improve computational efficiency.
187
+
188
+ # 4.3 Deep Adaptive Replay
189
+
190
+ In this section, we introduce a DAR mechanism that effectively integrates the historical and current
191
+
192
+ distributions of entities. Building upon the historical distribution representation of entities obtained in section 4.2, these representations are incorporated into the current distribution representation of entities.
193
+
194
+ We identify that overly complex historical knowledge injection mechanisms impose a considerable learning burden, whereas excessively simplistic approaches result in significant knowledge loss. To overcome these issues, we propose DAR for historical knowledge replay, which performs the following operations at each layer of the KG snapshot sequence reasoning model:
195
+
196
+ $$
197
+ H _ {e} ^ {l} = \left\{ \begin{array}{c} H _ {e} ^ {\text {c u r r e n t}, l}, \quad e \notin V _ {\text {r e p l a y}} \\ \alpha H _ {e} ^ {\text {r e p l a y}} + (1 - \alpha) H _ {e} ^ {\text {c u r r e n t}, l}, e \in V _ {\text {r e p l a y}} \end{array} , \right. \tag {12}
198
+ $$
199
+
200
+ where $\alpha \in [0,1]$ , which adaptively balances new and old knowledge. $H_{e}^{\mathrm{current},l}$ denotes the entity distribution representation of the current task at layer $l$ . To preserve the evolutionary characteristics in the temporal sequence, deep replay is conducted within the $L$ evolution units; the final entity representation, denoted as $H_{\mathrm{final}}$ , is obtained.
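+
+ Eq. (12) amounts to a per-layer gated mixture; the sketch below is illustrative (the tensor shapes and the learnable gate are assumptions):
+
+ ```python
+ import torch
+
+ def deep_adaptive_replay(h_current_l, h_replay, replay_mask, alpha):
+     """Mix historical and current entity representations at layer l (Eq. 12).
+
+     h_current_l: [num_entities, d] current-task representations at layer l
+     h_replay:    [num_entities, d] generated historical representations
+     replay_mask: [num_entities]    True where the entity belongs to V_replay
+     alpha:       scalar in [0, 1]  adaptive balance between old and new knowledge
+     """
+     mixed = alpha * h_replay + (1.0 - alpha) * h_current_l
+     return torch.where(replay_mask.unsqueeze(-1), mixed, h_current_l)
+
+ h_cur, h_rep = torch.randn(4, 8), torch.randn(4, 8)
+ mask = torch.tensor([True, False, True, False])
+ alpha = torch.sigmoid(torch.tensor(0.0))   # e.g. a learnable scalar squashed into [0, 1]
+ print(deep_adaptive_replay(h_cur, h_rep, mask, alpha).shape)   # torch.Size([4, 8])
+ ```
+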
201
+
202
+ # 4.4 Model Training
203
+
204
+ After obtaining the final representation, the decoder computes scores for candidate entities. We treat entity prediction as multi-class classification, and model parameters $\theta_{t}$ for task $t$ are optimized as follows:
205
+
206
+ $$
207
+ \mathcal {L} _ {t, c} = - \sum_ {(s, r, o, t) \in D _ {\text {t r a i n}} ^ {t}} y _ {t} ^ {e} f _ {\theta_ {t}} (s, r, o, t), \tag {13}
208
+ $$
209
+
210
+ where $y_{t}^{e}$ represents the label vector. During experiments, we observe that although we attempt to preserve historical knowledge by enhancing common features between the representations of current and historical distributions, the model still suffers from historical information loss. This is primarily due to the constraints of the guidance function and subsequent optimization for current data. To address this issue, we incorporate facts from the historical context prompt as a regularization term into the loss function. The final loss calculation is formulated as follows:
211
+
212
+ $$
213
+ \mathcal {L} _ {t} = \mathcal {L} _ {t, c} + \mu \mathcal {L} _ {t, r}, \tag {14}
214
+ $$
215
+
216
+ where $\mathcal{L}_{t,c}$ represents the training loss for the current task, $\mathcal{L}_{t,r}$ denotes the loss associated with replaying historical facts, and $\mu$ is a hyperparameter, typically set to 1. The computation of $\mathcal{L}_{t,r}$ is
217
+
218
+ similar to that of $\mathcal{L}_{t,c}$ . The difference is that $\mathcal{L}_{t,c}$ calculates the loss based on current facts, while $\mathcal{L}_{t,r}$ uses the historical facts in $Prompt_{\mathrm{replay}}$ .
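+
+ With cross-entropy standing in for the decoder's scoring loss, the combined objective of Eq. (14) can be sketched as follows (shapes and names are illustrative):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def dgar_loss(scores_current, labels_current, scores_replay, labels_replay, mu=1.0):
+     """L_t = L_{t,c} (current facts) + mu * L_{t,r} (facts from Prompt_replay)."""
+     loss_c = F.cross_entropy(scores_current, labels_current)
+     loss_r = F.cross_entropy(scores_replay, labels_replay)
+     return loss_c + mu * loss_r
+
+ scores_c = torch.randn(32, 100)   # 32 current queries over 100 candidate entities
+ scores_r = torch.randn(16, 100)   # 16 replayed historical queries
+ loss = dgar_loss(scores_c, torch.randint(0, 100, (32,)),
+                  scores_r, torch.randint(0, 100, (16,)))
+ print(float(loss))
+ ```
+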
219
+
220
+ # 5 Experiments
221
+
222
+ # 5.1 Experimental Setup
223
+
224
+ Datasets. We adopt four widely used benchmark datasets for TKGR tasks: ICE14, ICE18, ICE05-15, and GDELT. The first three datasets originate from the Integrated Crisis Early Warning System (Jin et al., 2020), which records geopolitical events. The statistical details of these datasets are summarized in Table B.1.
225
+
226
+ Metrics. We utilize two evaluation metrics: Mean Reciprocal Rank (MRR) and Hits@k $(\mathrm{k} = 1,10)$ , both of which are widely adopted to assess the performance of TKGR methods. Following the approach (Mirtaheri et al., 2023), we evaluate the model's ability to mitigate catastrophic forgetting. We evaluate the model trained on the final task $t$ by testing its performance on the current test set (Current) and calculating its average performance across all previous test sets (Average).
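+
+ Both metrics can be computed from the rank of the ground-truth entity for each test query, as in the following minimal sketch:
+
+ ```python
+ def mrr_and_hits(ranks, ks=(1, 10)):
+     """ranks: 1-based rank of the true entity for each test query."""
+     mrr = sum(1.0 / r for r in ranks) / len(ranks)
+     hits = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}
+     return mrr, hits
+
+ print(mrr_and_hits([1, 3, 12, 2]))   # -> (0.479..., {1: 0.25, 10: 0.75})
+ ```
+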
227
+
228
+ Baselines. We adopt the following baselines: FT, ER (Rolnick et al., 2019), TIE (Wu et al., 2021), LKGE (Cui et al., 2023), and IncDE (Liu et al., 2024a). Details about these baselines are provided in Appendix B.2. In the experiments, we use RE-GCN as the base model.
229
+
230
+ # 5.2 Main Results
231
+
232
+ The results of main experiments are shown in Table 1 and Table 2. Each dataset is tested five times and the average results are reported. The same procedure is also followed in subsequent experiments.
233
+
234
+ DGAR achieves consistent performance improvements compared to Fine-tuning. For historical tasks, it achieves an average increase of $11.34\%$ in MRR. This demonstrates that, compared to direct fine-tuning, our approach more effectively retains historical knowledge.
235
+
236
+ Moreover, DGAR consistently outperforms all baselines. Compared to the strongest baseline, it achieves an average MRR improvement of $4.01\%$ in the current task across all evaluated datasets. For historical tasks, the average improvement is $8.23\%$ in MRR, and $9.79\%$ in Hits@10. DGAR demonstrates improvements across various datasets by preserving historical knowledge.
237
+
238
+ In contrast, while the TIE model performs well on current tasks, it exhibits poor average performance across all historical tasks. This result is
239
+
240
+ <table><tr><td colspan="5">ICE14</td><td colspan="4">ICE18</td><td colspan="4">ICE05-15</td></tr><tr><td rowspan="2">Algo.</td><td colspan="2">Current</td><td colspan="2">Average</td><td colspan="2">Current</td><td colspan="2">Average</td><td colspan="2">Current</td><td colspan="2">Average</td></tr><tr><td>MRR</td><td>MRR</td><td>Hits@1</td><td>Hits@10</td><td>MRR</td><td>MRR</td><td>Hits@1</td><td>Hits@10</td><td>MRR</td><td>MRR</td><td>Hits@1</td><td>Hits@10</td></tr><tr><td>FT</td><td>42.66</td><td>37.46</td><td>26.95</td><td>58.20</td><td>30.76</td><td>25.35</td><td>15.97</td><td>44.36</td><td>43.80</td><td>41.88</td><td>30.55</td><td>63.82</td></tr><tr><td>ER</td><td>48.75</td><td>42.14</td><td>31.03</td><td>63.80</td><td>30.39</td><td>27.20</td><td>16.88</td><td>48.19</td><td>52.50</td><td>45.55</td><td>33.34</td><td>69.07</td></tr><tr><td>TIE</td><td>53.74</td><td>41.07</td><td>30.28</td><td>62.39</td><td>34.45</td><td>28.73</td><td>18.40</td><td>49.60</td><td>60.77</td><td>42.56</td><td>30.90</td><td>64.67</td></tr><tr><td>LKGE</td><td>43.56</td><td>37.51</td><td>27.13</td><td>58.51</td><td>31.12</td><td>25.56</td><td>16.12</td><td>44.70</td><td>43.28</td><td>42.46</td><td>30.99</td><td>64.51</td></tr><tr><td>IncDE</td><td>45.03</td><td>36.57</td><td>26.20</td><td>56.95</td><td>31.83</td><td>25.52</td><td>16.07</td><td>44.74</td><td>46.33</td><td>40.56</td><td>29.34</td><td>62.17</td></tr><tr><td>DGAR</td><td>58.59</td><td>50.12</td><td>39.36</td><td>70.48</td><td>36.53</td><td>33.00</td><td>21.74</td><td>55.63</td><td>66.01</td><td>54.33</td><td>43.11</td><td>75.13</td></tr></table>
241
+
242
+ Table 1: The main experimental results on the ICE14, ICE18, and ICE05-15 datasets are presented. Bolded scores indicate the best results.
243
+
244
+ <table><tr><td colspan="5">GDELT</td></tr><tr><td rowspan="2">Algo.</td><td colspan="2">Current</td><td colspan="2">Average</td></tr><tr><td>MRR</td><td>MRR</td><td>Hits@1</td><td>Hits@10</td></tr><tr><td>FT</td><td>14.74</td><td>15.60</td><td>8.73</td><td>29.05</td></tr><tr><td>ER</td><td>15.42</td><td>16.21</td><td>8.97</td><td>30.42</td></tr><tr><td>TIE</td><td>15.56</td><td>16.40</td><td>8.94</td><td>30.98</td></tr><tr><td>LKGE</td><td>14.43</td><td>15.52</td><td>8.69</td><td>28.90</td></tr><tr><td>IncDE</td><td>15.14</td><td>15.49</td><td>8.64</td><td>28.86</td></tr><tr><td>DGAR</td><td>23.25</td><td>28.30</td><td>17.38</td><td>51.39</td></tr></table>
245
+
246
+ Table 2: The main experimental results on the GDELT.
247
+
248
+ ![](images/660084bbade86b290ce511e92122fb6d386b1bc4b0e0c72a4269c99316127c5c.jpg)
249
+ (a) Current
250
+
251
+ ![](images/41f7e87a8d15fe7d2bf6d3f8c9d75ebe447ca622f5a5aa142fc6b4dcf51c6e8b.jpg)
252
+ (b) Average
253
+ Figure 3: Performance of different base TKGR models.
254
+
255
+ attributed to the strict regularization of TIE, which limits its ability to retain historical knowledge. IncDE and LKGE only constrain the entity and relation embeddings with those from the previous moment to retain old knowledge, which leads to a decline in IncDE's performance on historical tasks compared to FT. LKGE additionally considers the constraints of cumulative weights, so it shows a slight improvement on certain datasets compared to FT.
256
+
257
+ # 5.3 Ablation Study
258
+
259
+ In this section, we examine the impact of various components of the model on the final result, as shown in Table 3. To thoroughly evaluate their roles, we implement the following model variants: (1) w/o HP, where the HCP is replaced with ER; (2) w/o GR, where the variant discards DAR and Diff-HDG, relying only on the facts within the HCP for regularization; (3) w/o AR, where the historical and
260
+
261
+ current entity distributions are merged through direct addition instead of DAR as specified in Eq. 12; and (4) w/o Guider, where the operation of enhancing common features across different distributions in Diff-HDG is discarded; (5) w/o $L_{r}$ , where the loss $\mathcal{L}_{t,r}$ in Section 4.4 is removed during training. The analysis of w/o $L_{r}$ is provided in Appendix B.6.
262
+
263
+ Effect Analysis of Historical Context Prompt. In the w/o HP variant, the model's performance noticeably declined, demonstrating that HCP effectively ensures the semantic integrity of the historical information. It prevents catastrophic forgetting during CL and enhances predictions for the current task. In contrast, ER merely replays partial historical information.
264
+
265
+ Effect Analysis of Diffusion-enhanced Historical Distribution Generation. In the w/o Guider variant, different datasets show varying degrees of performance drop. This demonstrates that incorporating the guider aids in capturing common features between historical and current distributions, thereby mitigating performance losses caused by distribution conflicts. The smaller drop observed on the GDELT dataset likely results from its shorter temporal gaps and less pronounced distribution shifts compared to other datasets.
266
+
267
+ Effect Analysis of Deep Adaptive Replay. Removing the adaptive parameter $\alpha$ in the w/o AR variant causes varying levels of performance decrease across datasets, demonstrating the effectiveness of our adaptive fusion method in balancing historical and current distribution representations. The limited decrease observed on the GDELT dataset can be attributed to the minimal difference between the current distribution and that of GDELT, which restricts the adaptive parameter $\alpha$ in its capacity to adjust effectively.
268
+
269
+ Combined Effect Analysis of DAR and Diff-HDG. DAR and Diff-HDG are proposed to resolve the conflict between historical and current
270
+
271
+ <table><tr><td colspan="4">ICE14</td><td colspan="3">ICE18</td><td colspan="3">ICE05-15</td><td colspan="3">GDELT</td></tr><tr><td rowspan="2"></td><td>Current</td><td colspan="2">Average</td><td>Current</td><td colspan="2">Average</td><td>Current</td><td colspan="2">Average</td><td>Current</td><td colspan="2">Average</td></tr><tr><td>MRR</td><td>MRR</td><td>Hits@10</td><td>MRR</td><td>MRR</td><td>Hits@10</td><td>MRR</td><td>MRR</td><td>Hits@10</td><td>MRR</td><td>MRR</td><td>Hits@10</td></tr><tr><td>w/o HP</td><td>53.43</td><td>46.74</td><td>67.58</td><td>31.67</td><td>25.89</td><td>45.31</td><td>53.53</td><td>45.71</td><td>68.18</td><td>15.49</td><td>17.18</td><td>32.73</td></tr><tr><td>w/o GR</td><td>48.18</td><td>39.15</td><td>59.71</td><td>30.32</td><td>27.62</td><td>47.74</td><td>52.97</td><td>38.71</td><td>59.19</td><td>16.24</td><td>16.67</td><td>31.65</td></tr><tr><td>w/o AR</td><td>49.27</td><td>45.55</td><td>65.27</td><td>32.28</td><td>29.79</td><td>50.90</td><td>58.48</td><td>51.93</td><td>72.57</td><td>22.04</td><td>26.98</td><td>49.76</td></tr><tr><td>w/o Guider</td><td>55.23</td><td>49.32</td><td>70.29</td><td>35.16</td><td>32.25</td><td>54.67</td><td>58.70</td><td>52.53</td><td>72.85</td><td>22.18</td><td>27.99</td><td>50.93</td></tr><tr><td>w/o Lr</td><td>52.17</td><td>44.43</td><td>64.13</td><td>36.20</td><td>30.62</td><td>52.98</td><td>58.64</td><td>51.98</td><td>72.01</td><td>22.84</td><td>25.95</td><td>48.75</td></tr><tr><td>Ours</td><td>58.59</td><td>50.12</td><td>70.48</td><td>36.53</td><td>33.00</td><td>55.63</td><td>66.01</td><td>54.33</td><td>75.13</td><td>23.25</td><td>28.30</td><td>51.39</td></tr></table>
272
+
273
+ Table 3: Ablation experimental results on all datasets.
274
+
275
+ distribution. The w/o GR variants show a clear performance drop, likely due to memory contention caused by distribution conflicts arising from the simple replay of historical data. This suggests that Dif-HDG and DAR alleviate distribution conflicts, thereby preserving historical knowledge more efficiently.
276
+
277
+ Effect Analysis of Different Base Models. Although we choose the most typical model, RE-GCN, as our base model, our method can still be extended to other GNN-based TKGR models. To verify the scalability of our method, we extend DGAR to TiRGN (Li et al., 2022b) and conduct experiments on four benchmark datasets. The experimental results of MRR in Figure 3 indicate that DGAR consistently outperforms direct fine-tuning on both current tasks and historical tasks, demonstrating that DGAR has robust scalability and effectiveness for GNN-based TKGR models.
278
+
279
+ # 5.4 Effect of Memorizing in CL
280
+
281
+ To further validate DGAR's ability to retain historical knowledge during forward learning, we evaluate the mean difference between $p_{n,i}$ and $p_{i,i}$ ( $1 < i \leq n$ ) in Figure 4. Here, $p_{i,j}$ represents the MRR score of the $j$ -th task after training the model on the $i$ -th task. A higher mean value indicates better retention of prior knowledge during CL. When the value exceeds zero, it indicates reverse transfer of newly learned knowledge, whereas a value below zero reflects the loss of prior knowledge during CL (Lin et al., 2022). Experiments show that DGAR outperforms the best baseline, confirming its effectiveness in mitigating catastrophic forgetting. Since the data in TKGs are highly correlated, we find that when the number of tasks is small, a reasonable strategy can help prevent catastrophic forgetting and facilitate the reverse transfer of new knowledge. This is evident in the performance of DGAR and ER on ICE14.
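+
+ Concretely, the quantity reported in Figure 4 can be computed as in the sketch below, where `p[i][j]` (0-indexed here) is the MRR on task j after training through task i:
+
+ ```python
+ def mean_backward_transfer(p):
+     """Mean of p[n][i] - p[i][i] over all but the first task; negative values indicate forgetting."""
+     n = len(p) - 1
+     diffs = [p[n][i] - p[i][i] for i in range(1, n + 1)]
+     return sum(diffs) / len(diffs)
+
+ # toy MRR matrix for 3 tasks: rows = trained-up-to task, columns = evaluated task
+ p = [[0.40, 0.00, 0.00],
+      [0.35, 0.42, 0.00],
+      [0.33, 0.40, 0.45]]
+ print(mean_backward_transfer(p))   # (0.40 - 0.42 + 0.45 - 0.45) / 2 = -0.01
+ ```
+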
282
+
283
+ ![](images/767927ede10b533391c7357eb581fdda7c244b07d65bfdfed0a0068aee285080.jpg)
284
+ (a) ICE14 and ICE18
285
+
286
+ ![](images/bb603fe62f750fb023b42d786c7f6492e3c84b761c6574fee9ac4639da8fc2db.jpg)
287
+ (b) ICE05-15 and GDELT
288
+ Figure 4: Effect of memorizing old knowledge in CL.
289
+
290
+ # 5.5 Case Study
291
+
292
+ We conduct a case study on the ICE14 and ICE18 datasets to assess whether DGAR can handle conflicts in entity distributions and retain old knowledge effectively, as shown in Figure 5. At a randomly chosen time $i$ , we extract all entities from the facts, save their feature distributions at time $i$ (Stage 1), time $j$ (Stage 2), time $m$ (Stage 3), and at a later time $n$ ( $n > m > j > i$ ) (Stage 4), and analyze them using U-MAP.
293
+
294
+ Figures 5(a) and 5(b) compare the entity distribution representations of the FT model and DGAR on ICE14 across four stages. The entity distribution learned by the FT model at the same time is more clustered, while the entity distributions at different times are more distinct, showing a clearer difference. In contrast, DGAR learns a more general and consistent distribution, allowing it to share the feature space between tasks more effectively, thereby enhancing knowledge retention and reducing forgetting. A similar pattern is observed in Figures 5(c) and 5(d) on ICE18, further supporting these findings.
295
+
296
+ # 6 Conclusion
297
+
298
+ This paper introduces a deep generative adaptive replay method to mitigate catastrophic forgetting in TKGR models during CL. A historical context prompt integrating contextual information is designed to generate historical distribution representations of entities via a pre-trained DM. The generation process is guided by current model parameters
299
+
300
+ ![](images/c99d33723f4df555bfdb5a4bd49dbd2db288033a5296e4ddc34626ad3cfb24a1.jpg)
301
+ Figure 5: Visualization case study of entity distribution.
302
+
303
+ to reinforce common features, minimizing conflicts between historical and current entity distributions. In addition, a deep adaptive replay strategy derives entity distribution representations with historical knowledge. These combined techniques enable the proposed method to achieve outstanding performance across various datasets.
304
+
305
+ # 7 Limitations
306
+
307
+ In this section, we examine the limitations of our approach. DGAR is designed to retain previously acquired knowledge through CL, facilitating TKGR. Although DGAR is more time-efficient than retraining and surpasses other models in mitigating catastrophic forgetting, it still faces several challenges.
308
+
309
+ Firstly, the model addresses newly emerging entities and relations using Xavier initialization without further analysis or dedicated modeling. Such a simplistic approach may constrain the model's ability to learn new knowledge effectively, particularly when complex interrelations exist between new and previously learned knowledge. This highlights the need for more sophisticated strategies to handle new entities and relations in CL scenarios.
310
+
311
+ Secondly, while DGAR demonstrates strong performance in reducing catastrophic forgetting, it introduces additional learnable parameters. These parameters enhance adaptability to new knowledge but also pose a potential risk of forgetting previously learned information. This risk arises since the increased number of parameters may lead the model to prioritize new knowledge, thereby compromising the retention of older knowledge. Furthermore, the inclusion of additional parameters inherently increases model complexity, making the
312
+
313
+ training and reasoning process more cumbersome. Such complexity necessitates careful consideration during design to strike a balance between knowledge retention and model complexity.
314
+
315
+ # 8 Ethics Statement
316
+
317
+ Firstly, this study fully complies with the ethical guidelines in the ACL Code of Ethics. Secondly, all datasets involved in this study come from previous studies and do not contain individual privacy data. Finally, DGAR focuses on research and experiments for TKGR tasks. As with other TKGR methods, the outputs of our method may be toxic or erroneous, so manual inspection of the results may be required in applications.
318
+
319
+ # References
320
+
321
+ Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne Van Den Berg. 2021. Structured denoising diffusion models in discrete state-spaces. Advances in Neural Information Processing Systems, 34:17981-17993.
322
+ Yuxiang Cai, Qiao Liu, Yanglei Gan, Changlin Li, Xueyi Liu, Run Lin, Da Luo, and JiayeYang JiayeYang. 2024. Predicting the unpredictable: Uncertainty-aware reasoning over temporal knowledge graphs via diffusion process. In Findings of the Association for Computational Linguistics ACL 2024, pages 5766-5778.
323
+ Kai Chen, Ye Wang, Yitong Li, Aiping Li, Han Yu, and Xin Song. 2024a. A unified temporal knowledge graph reasoning model towards interpolation and extrapolation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 117-132, Bangkok, Thailand. Association for Computational Linguistics.
324
+ Wei Chen, Huaiyu Wan, Yuting Wu, Shuyuan Zhao, Jiayaqi Cheng, Yuxin Li, and Youfang Lin. 2024b. Local-global history-aware contrastive learning for temporal knowledge graph reasoning. In 2024 IEEE 40th International Conference on Data Engineering (ICDE), pages 733-746.
325
+ Wei Chen, Yuting Wu, Shuhan Wu, Zhiyu Zhang, Mengqi Liao, Youfang Lin, and Huaiyu Wan. 2024c. Cogntke: A cognitive temporal knowledge extrapolation framework. ArXiv, abs/2412.16557.
326
+ Yubo Chen, Shaoru Guo, Kang Liu, and Jun Zhao. 2023. Large language models and knowledge graphs. In Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum), pages 67-76.
327
+
328
+ Yuanning Cui, Yuxin Wang, Zequn Sun, Wenqiang Liu, Yiqiao Jiang, Kexin Han, and Wei Hu. 2023. Life-long embedding learning and transfer for growing knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 4217-4224.
329
+ Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of the 2018 conference on empirical methods in natural language processing, pages 2001-2011.
330
+ Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, and Pascal Poupart. 2020. Diachronic embedding for temporal knowledge graph completion. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 3988-3995.
331
+ Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and LingPeng Kong. 2022. Diffuseq: Sequence to sequence text generation with diffusion models. arXiv preprint arXiv:2210.08933.
332
+ Saiping Guan, Xueqi Cheng, Long Bai, Fujun Zhang, Zixuan Li, Yutao Zeng, Xiaolong Jin, and Jiafeng Guo. 2022. What is event knowledge graph: A survey. IEEE Transactions on Knowledge and Data Engineering, 35(7):7569-7589.
333
+ Bernal Jiménez Gutierrez, Yiheng Shu, Yu Gu, Michihiro Yasunaga, and Yu Su. 2024. Hipporag: Neurobiologically inspired long-term memory for large language models. arXiv preprint arXiv:2405.14831.
334
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851.
335
+ Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. 2020. Heterogeneous graph transformer. In Proceedings of the web conference 2020, pages 2704-2710.
336
+ Rikui Huang, Wei Wei, Xiaoye Qu, Shengzhe Zhang, Dangyang Chen, and Yu Cheng. 2024. Confidence is not timeless: Modeling temporal validity for rule-based temporal knowledge graph forecasting. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10783-10794, Bangkok, Thailand. Association for Computational Linguistics.
337
+ Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. 2020. Recurrent event network: Autoregressive structure inference over temporal knowledge graphs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6669-6683, Online. Association for Computational Linguistics.
338
+ Timothee Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. In 8th International
339
+
340
+ Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
341
+ Julien Leblay and Melisachew Wudage Chekol. 2018. Deriving validity time in knowledge graph. In Companion Proceedings of the Web Conference 2018, pages 1771-1776.
342
+ Eunhae Lee. 2024. The impact of model size on catastrophic forgetting in online continual learning. Preprint, arXiv:2407.00176.
343
+ Xiang Li, John Thickstun, Ishaan Gulrajani, Percy S Liang, and Tatsunori B Hashimoto. 2022a. Diffusion-LM improves controllable text generation. Advances in Neural Information Processing Systems, 35:4328-4343.
344
+ Yujia Li, Shiliang Sun, and Jing Zhao. 2022b. Tirgn: Time-guided recurrent graph network with local-global historical patterns for temporal knowledge graph reasoning. In IJCAI, pages 2152-2158.
345
+ Yujia Li, Shiliang Sun, and Jing Zhao. 2022c. Tirgn: Time-guided recurrent graph network with local-global historical patterns for temporal knowledge graph reasoning. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 2152-2158.
346
+ Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang, and Xueqi Cheng. 2021. Temporal knowledge graph reasoning based on evolutional representation learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 408-417.
347
+ Sen Lin, Li Yang, Deliang Fan, and Junshan Zhang. 2022. Beyond not-forgetting: Continual learning with backward knowledge transfer. Advances in Neural Information Processing Systems, 35:16165-16177.
348
+ Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu Shang, Jinhua Gao, Guozheng Li, Ke Ji, and Yanhe Liu. 2024a. Towards continual knowledge graph embedding via incremental distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 8759-8768.
349
+ Jiajun Liu, Wenjun Ke, Peng Wang, Jiahao Wang, Jinhua Gao, Ziyu Shang, Guozheng Li, Zijie Xu, Ke Ji, and Yining Li. 2024b. Fast and continual knowledge graph embedding via incremental lora. arXiv preprint arXiv:2407.05705.
350
+ Xiao Long, Liansheng Zhuang, Aodi Li, Houqiang Li, and Shafei Wang. 2024. Fact embedding through diffusion model for knowledge graph completion. In Proceedings of the ACM on Web Conference 2024, pages 2020-2029.
351
+ Mehrnoosh Mirtaheri, Mohammad Rostami, and Aram Galstyan. 2023. History repeats: Overcoming catastrophic forgetting for event-centric temporal knowledge graph completion. arXiv preprint arXiv:2305.18675.
352
+
353
+ David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. 2019. Experience replay for continual learning. Advances in neural information processing systems, 32.
354
+
355
+ Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695.
356
+
357
+ Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pages 2256-2265. PMLR.
358
+
359
+ Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. 2023. Sketch-guided text-to-image diffusion models. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11.
360
+
361
+ Jiapu Wang, Kai Sun, Linhao Luo, Wei Wei, Yongli Hu, Alan Wee-Chung Liew, Shirui Pan, and Baocai Yin. 2024. Large language models-guided dynamic adaptation for temporal knowledge graph reasoning. arXiv preprint arXiv:2405.14170.
362
+
363
+ Jiapeng Wu, Yishi Xu, Yingxue Zhang, Chen Ma, Mark Coates, and Jackie Chi Kit Cheung. 2021. Tie: A framework for embedding-based incremental temporal knowledge graph completion. In Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval, pages 428-437.
364
+
365
+ Shuhan Wu, Huaiyu Wan, Wei Chen, Yuting Wu, Junfeng Shen, and Youfang Lin. 2023. Towards enhancing relational rules for knowledge graph link prediction. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10082-10097, Singapore. Association for Computational Linguistics.
366
+
367
+ Yujie Xing, Xiao Wang, Yibo Li, Hai Huang, and Chuan Shi. 2024. Less is more: on the overglobalizing problem in graph transformers. arXiv preprint arXiv:2405.01102.
368
+
369
+ Yi Xu, Junjie Ou, Hui Xu, and Luoyi Fu. 2023a. Temporal knowledge graph reasoning with historical contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 4765-4773.
370
+
371
+ Yi Xu, Junjie Ou, Hui Xu, and Luoyi Fu. 2023b. Temporal knowledge graph reasoning with historical contrastive learning. In AAAI.
372
+
373
+ Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, et al. 2023. Reco: Region-controlled text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14246-14255.
374
+
375
+ Jiasheng Zhang, Jie Shao, and Bin Cui. 2023a. StreamE: Learning to update representations for temporal knowledge graphs in streaming scenarios. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 622-631.
376
+
377
+ Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. 2023b. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847.
378
+
379
+ Shuyuan Zhao, Wei Chen, Boyan Shi, Liyong Zhou, Shuohao Lin, and Huaiyu Wan. 2025. Spatial-temporal knowledge distillation for takeaway recommendation. In AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, pages 13365-13373. AAAI Press.
380
+
381
+ # A Details about DGAR
382
+
383
+ # A.1 Details about Deep Adaptive Replay
384
+
385
+ While more complex mechanisms were initially explored for this integration, it is observed that the additional parameters they introduce hinder the model's convergence due to increased fitting complexity. As a result, a direct injection approach is adopted to integrate the historical distributions into the current representations, as detailed below:
386
+
387
+ $$
388
+ H_{\text{final}} = \begin{cases} H_{e}^{\text{current}}, & e \notin V_{\text{replay}} \\ H_{e}^{\text{replay}} + H_{e}^{\text{current}}, & e \in V_{\text{replay}} \end{cases} \tag{15}
389
+ $$
390
+
391
+ However, it is observed that such a simple and direct fusion approach results in performance degradation. To address this issue, a straightforward parameter is introduced to balance the historical distribution representation $H_{e}^{\mathrm{replay}}$ and the current distribution representation $H_{e}^{\mathrm{current}}$ , thereby generating the final entity representation $H_{\mathrm{final}}$ . This parameter effectively adjusts the weighting of the two distributions, mitigating the performance loss caused by direct fusion while preserving the expressive power of historical knowledge and the dynamic characteristics of the current distribution:
392
+
393
+ $$
394
+ H_{\text{final}} = \alpha H_{e}^{\text{replay}} + (1 - \alpha) H_{e}^{\text{current}}, \quad e \in V_{\text{replay}}, \tag{16}
395
+ $$
396
+
397
+ where $\alpha \in [0,1]$ . $H_{\mathrm{final}}$ denotes the final entity representation, combining the most recent and historical information of the entity.
398
+
399
+ To prevent the loss of entity evolution patterns over time caused by direct injection, we integrate the distribution representations from the evolution units of the KG snapshot sequence reasoning model to
400
+
401
+ achieve a deeper incorporation of historical distribution representations without introducing additional parameters. After applying the relation-aware GCN in the $l$ -th evolution unit, we obtain the current distribution representation of entities, $H_{e}^{\mathrm{current},l}$ . The historical distribution representation of entities is then injected into the current distribution representation as follows:
402
+
403
+ $$
404
+ H_{e}^{\text{current}, l} = \alpha H_{e}^{\text{replay}} + (1 - \alpha) H_{e}^{\text{current}, l}, \quad e \in V_{\text{replay}}. \tag{17}
405
+ $$
406
+
407
+ After passing through multiple evolution units, the final entity representation $H_{\mathrm{final}}$ , which incorporates historical distributions, is obtained.
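+
+ A minimal sketch of this injection step is given below, assuming PyTorch tensors for the entity representations; the tensor names and the masking helper are illustrative rather than the paper's actual implementation.
+
+ ```python
+ import torch
+
+ def inject_replay(h_current: torch.Tensor,
+                   h_replay: torch.Tensor,
+                   replay_mask: torch.Tensor,
+                   alpha: float = 0.5) -> torch.Tensor:
+     """Blend historical and current entity representations as in Eq. (17).
+
+     h_current  : (num_entities, d) output of the relation-aware GCN in one
+                  evolution unit.
+     h_replay   : (num_entities, d) generated historical distribution
+                  representations.
+     replay_mask: boolean vector marking entities in V_replay.
+     """
+     blended = alpha * h_replay + (1.0 - alpha) * h_current
+     # Entities outside V_replay keep their current representation
+     # (first case of Eq. 15).
+     return torch.where(replay_mask.unsqueeze(-1), blended, h_current)
+ ```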
408
+
409
+ # A.2 Pre-train for DM
410
+
411
+ In this section, we discuss how the pre-trained DM is obtained. Since training the diffusion model on a large amount of knowledge would both increase the risk of data leakage and prevent adaptation to new data arriving over time, we adopt CL to train the DM. Specifically, at time step $t$ , we take the DM $\phi_{t-1}$ pre-trained at the previous step, and $\phi_{t-1}$ serves as the pre-trained DM that assists in generating the historical entity distributions in Diff-HDG. After completing all the operations in Section 4, $\phi_t$ is initialized with $\phi_{t-1}$ , and its parameters are updated on $D_{train}^{t}$ . Eq. 16 is employed to preserve the entities' historical knowledge for the DM. Upon completion of training, a new pre-trained DM, $\phi_t$ , is obtained and used in the subsequent learning process.
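+
+ The loop below is an illustrative outline of this continual DM training schedule; the `generate` and `fit` calls are hypothetical placeholders for the Diff-HDG generation and DM update steps described above.
+
+ ```python
+ import copy
+
+ def continual_dm_step(dm_prev, d_train_t, tkgr_model_t):
+     """One CL step for the diffusion model (illustrative pseudocode).
+
+     dm_prev      : pre-trained DM phi_{t-1} from the previous time step
+     d_train_t    : training facts D_train^t at time t
+     tkgr_model_t : current TKGR model guiding generation in Diff-HDG
+     """
+     # phi_{t-1} assists in generating historical entity distributions.
+     replay_repr = dm_prev.generate(tkgr_model_t, d_train_t)  # hypothetical API
+
+     # phi_t is initialized from phi_{t-1} and updated on D_train^t,
+     # blending in historical knowledge as in Eq. (16).
+     dm_t = copy.deepcopy(dm_prev)
+     dm_t.fit(d_train_t, replay=replay_repr)                  # hypothetical API
+     return dm_t
+ ```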
412
+
413
+ # B Further Analysis
414
+
415
+ # B.1 Datasets Details
416
+
417
+ <table><tr><td></td><td>ICE14</td><td>ICE18</td><td>ICE05-15</td><td>GDELT</td></tr><tr><td>Entities</td><td>6,869</td><td>23,033</td><td>10,094</td><td>7,691</td></tr><tr><td>Relations</td><td>230</td><td>256</td><td>251</td><td>240</td></tr><tr><td>Tasks</td><td>365</td><td>304</td><td>4,017</td><td>2,751</td></tr><tr><td>Task granularity</td><td>24 hours</td><td>24 hours</td><td>24 hours</td><td>15 mins</td></tr><tr><td>Total number of train</td><td>74,845</td><td>373,018</td><td>368,868</td><td>1,734,399</td></tr><tr><td>Total number of valid</td><td>8,514</td><td>45,995</td><td>46,302</td><td>238,765</td></tr><tr><td>Total number of test</td><td>7,371</td><td>49,545</td><td>46,159</td><td>305,241</td></tr></table>
418
+
419
+ Table 4: Details of the TKG datasets.
420
+
421
+ We follow the common division ratio of TKGR tasks: The facts in each task are partitioned into train, valid, and test sets in a ratio of 8:1:1 (Li et al., 2021; Xu et al., 2023a). The statistical details of the datasets are shown in Table 4.
422
+
423
+ # B.2 Baselines Details
424
+
425
+ FT (fine-tuning) is a naive baseline where the model is fine-tuned using newly added facts without any mechanism to alleviate catastrophic forgetting. FT is set up following previous works such as TIE (Wu et al., 2021), LKGE (Cui et al., 2023), and IncDE (Liu et al., 2024a). FT enables the base model (e.g., RE-GCN and TiRGN) to perform CL without applying any additional strategies. Specifically, the base model inherits the parameters from the previous time step $i - 1$ and continues training on the training data $D_{train}^{i}$ at time step $i$ . ER (Rolnick et al., 2019) mitigates forgetting by replaying a subset of previously stored events alongside newly added facts during training. TIE (Wu et al., 2021) incorporates temporal regularization, experience replay with positive facts, and the use of deleted facts as negative examples to effectively address both catastrophic forgetting and intransigence. LKGE (Cui et al., 2023) preserves historical knowledge by leveraging historical weights and embeddings through the L2 paradigm; we incorporate its reconstruction loss and embedding regularization into our objective function and adopt its embedding transfer strategy when initializing new entities. Following IncDE (Liu et al., 2024a), we leverage its hierarchical ordering measure and incorporate its distillation loss into our objective function.
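+
+ As a point of reference, the FT baseline reduces to the loop below; the `fit` call is a hypothetical stand-in for one training pass of the base model, not the authors' code.
+
+ ```python
+ def fine_tune_baseline(base_model, task_stream):
+     """Naive FT: inherit parameters from step i-1 and keep training on
+     D_train^i, with no mechanism against catastrophic forgetting."""
+     model = base_model  # e.g. RE-GCN or TiRGN
+     for d_train_i in task_stream:   # tasks arriving over time
+         model.fit(d_train_i)        # hypothetical training call
+     return model
+ ```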
426
+
427
+ # B.3 Implementation Details
428
+
429
+ For all datasets, the embedding size $d$ is set to 200, the learning rate $lr$ is set to 0.001, and the batch size is determined by the number of facts at each time step. The number of layers of the Transformer encoder is set to 2 for all datasets. The temperature coefficient $\tau$ is set to 0.5 for all datasets. The parameters of DGAR are optimized using Adam during the training process. The optimal coefficient $\gamma$ in Diff-HDG is set to 1. The optimal number of layers $L$ in the DAR is set to 3. The optimal loss coefficient $\mu$ in model training is set to 1. The optimal number of HCP samples $k$ for ICE14, ICE18, ICE05-15, and GDELT is set to 35, 25, 40, and 32, respectively. We conduct hyperparameter search experiments on the primary parameters of DGAR using controlled variables. The number of parameters on ICE14, ICE18, ICE05-15, and GDELT is 17.55 MB, 33.71 MB, 21.44 MB, and 21.38 MB, respectively. All experiments are
430
+
431
+ ![](images/8362f74b527972cd7e876efd06008f9214bdcf14b7a1ad73f7daa9868239dbac.jpg)
432
+ (a) ICE14
433
+
434
+ ![](images/b64a4234e960360f5c95a898ce8e06a7b84fe17b086096016d96f957c67256c1.jpg)
435
+ (b) ICE18
436
+ Figure 6: Sensitivity Analysis.
437
+
438
+ conducted on an NVIDIA A40 GPU.
439
+
440
+ # B.4 Sensitivity Analysis
441
+
442
+ In this section, we conduct experiments on ICE14 and ICE18 to further analyze the impact of the hyperparameter $k$ in DGAR. The hyperparameter $k$ means that the replay data consists of HCPs from $k$ different time steps.
443
+
444
+ To explore how the value of $k$ affects the model's ability to retain historical knowledge, we test different values of $k$ . The results are shown in Figure 6, including MRR and Hits@3 results on the historical tasks. A larger $k$ means that the replay data consists of HCPs from more different time steps. The results reveal that on the ICE14 dataset, DGAR demonstrates an initial improvement followed by a plateau as $k$ increases. These results indicate that selecting HCPs from more time steps for replaying historical distribution representations does not significantly enhance the final performance but instead increases computational costs. Notably, even with only 5 recall time slices, DGAR outperforms all baseline models. This demonstrates that our approach can effectively help the model retain historical knowledge, even under limited memory constraints.
445
+
446
+ # B.5 Inference Efficiency
447
+
448
+ <table><tr><td></td><td colspan="3">ICE14</td><td colspan="3">ICE18</td></tr><tr><td></td><td>k</td><td>Time(s) / Task</td><td>MRR</td><td>k</td><td>Time(s) / Task</td><td>MRR</td></tr><tr><td>Retrain</td><td>—</td><td>553.48</td><td>49.80</td><td>—</td><td>530.48</td><td>31.35</td></tr><tr><td rowspan="4">DGAR</td><td>5</td><td>4.00</td><td>44.93</td><td>5</td><td>13.12</td><td>31.11</td></tr><tr><td>25</td><td>5.21</td><td>49.90</td><td>15</td><td>16.32</td><td>32.43</td></tr><tr><td>35</td><td>5.42</td><td>50.12</td><td>25</td><td>18.83</td><td>33.00</td></tr><tr><td>45</td><td>5.68</td><td>50.05</td><td>35</td><td>19.41</td><td>33.11</td></tr><tr><td>FT</td><td>—</td><td>2.53</td><td>37.46</td><td>—</td><td>5.48</td><td>25.35</td></tr><tr><td>ER</td><td>—</td><td>2.86</td><td>42.14</td><td>—</td><td>5.16</td><td>27.20</td></tr><tr><td>TIE</td><td>—</td><td>4.38</td><td>41.07</td><td>—</td><td>10.71</td><td>29.31</td></tr><tr><td>LKGE</td><td>—</td><td>2.70</td><td>37.51</td><td>—</td><td>5.03</td><td>25.56</td></tr><tr><td>IncDE</td><td>—</td><td>4.07</td><td>36.57</td><td>—</td><td>9.71</td><td>25.52</td></tr></table>
+
+ Table 5: Inference efficiency analysis.
449
+
450
+ We report the average time cost of each task and
451
+
452
+ the MRR on historical tasks for DGAR at different $k$ values in Table 5. The hyperparameter $k$ means that the replay data consists of HCPs from $k$ different time steps. We further report the average time consumption of each task and the MRR on historical tasks under different baselines and the retraining setting in Table 5. Unlike DGAR and the baselines, retraining involves reprocessing the entire dataset whenever new data arrives. Comparing the results, our method outperforms retraining while requiring far less time, which shows the strong ability of our model in dealing with catastrophic forgetting. Because our model uses more complex operations than the baselines in order to retain more historical knowledge, it is more time-consuming than they are. Future research will focus on enhancing reasoning efficiency while preserving the accuracy of historical knowledge retention.
453
+
454
+ # B.6 Effect Analysis of $\mathcal{L}_T$
455
+
456
+ In the w/o $\mathcal{L}_r$ variant, the performance on the historical tasks drops significantly. This shows that the $\mathcal{L}_{r,t}$ loss in Section 4.4 effectively alleviates the loss of historical information caused by current data optimization and Diff-HDG.
457
+
458
+ # B.7 Random Selection of HCPs
459
+
460
+ To verify whether the generalization of the replay data is enhanced, we add an additional experiment. Compared to replaying historical data from a fixed set of $k$ time slices, randomly selecting across the entire history provides more global and generalizable information. We therefore add a variant in which the historical context prompts from the $k$ nearest time slices are selected as the replay data. The experimental results are shown in Table 6:
461
+
462
463
+
464
+ <table><tr><td colspan="3">ICE14</td><td colspan="3">ICE18</td></tr><tr><td>k</td><td colspan="2">Average (MRR)</td><td>k</td><td colspan="2">Average (MRR)</td></tr><tr><td></td><td>nearest</td><td>random</td><td></td><td>nearest</td><td>random</td></tr><tr><td>25</td><td>46.64</td><td>49.90</td><td>15</td><td>31.23</td><td>32.43</td></tr><tr><td>35</td><td>48.11</td><td>50.12</td><td>25</td><td>31.89</td><td>33.00</td></tr><tr><td>45</td><td>47.86</td><td>50.05</td><td>35</td><td>32.03</td><td>33.11</td></tr></table>
465
+
466
+ Table 6: Effect of Random Selection
467
+
468
+ The nearest setting refers to replaying historical context prompts sampled from the $k$ most recent time slices. As shown in Table 6, across all $k$ values, random selection outperforms nearest selection on the historical tasks. The experimental results
469
+
470
+ are consistent with our expectations, mainly because randomly selected historical context prompts provide the model with more generalized data, thus improving the model's performance on the test set.
471
+
472
+ # B.8 Comparison Based on LogCL
473
+
474
+ To further verify whether DGAR can enhance the performance of recent GNN-based models under the CL setting, we conducted the following experiment.
475
+
476
+ We selected LogCL (Chen et al., 2024b), a representative GNN-based TKGR model from recent works, as the base model. Below, we report its performance under the CL setting (FT) and its performance when combined with DGAR in the same setting on two datasets.
477
+
478
+ <table><tr><td></td><td colspan="3">ICE14</td><td colspan="3">ICE18</td></tr><tr><td rowspan="2">Algo.</td><td>Current</td><td colspan="2">Average</td><td>Current</td><td colspan="2">Average</td></tr><tr><td>MRR</td><td>MRR</td><td>Hits@10</td><td>MRR</td><td>MRR</td><td>Hits@10</td></tr><tr><td>FT</td><td>31.63</td><td>28.93</td><td>48.61</td><td>38.46</td><td>30.47</td><td>55.52</td></tr><tr><td>DGAR</td><td>37.12</td><td>33.08</td><td>53.83</td><td>43.20</td><td>32.88</td><td>58.88</td></tr></table>
479
+
480
+ Table 7: Performance based on LogCL.
481
+
482
+ The above experiments show that DGAR significantly enhances LogCL's reasoning performance under the CL setting. Interestingly, LogCL performs worse on the ICE14 dataset than on ICE18, which contrasts with its performance in the full-training setting. This discrepancy occurs because ICE14 has a simpler data distribution than ICE18, while LogCL's complex model structure makes it prone to overfitting on ICE14. Under the CL setting, highly complex models struggle to maintain stable learned features as new data is introduced (Lee, 2024). DGAR mitigates this issue by replaying historical information, helping LogCL retain its learned features more effectively. Consequently, TKGR methods that excel in full-training scenarios may not necessarily achieve better reasoning performance under the CL setting.
2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c142a913d2821b1d23d282d7143bc421af50c2022d9cd8f0ea2d9164ff9b85b4
3
+ size 612013
2025/A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/80295725-d200-4ec3-8d76-7e55ad235eea_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/80295725-d200-4ec3-8d76-7e55ad235eea_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/80295725-d200-4ec3-8d76-7e55ad235eea_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:abc99f72e1d5591c2cd3ae640d4a8d69b88ef9ebb05836107b83aee51023a1be
3
+ size 4906258
2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/full.md ADDED
@@ -0,0 +1,710 @@
1
+ # A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment
2
+
3
+ Jean-Philippe Corbeil<sup>1*</sup>, Amin Dada<sup>4</sup>, Jean-Michel Attendu<sup>1</sup>, Asma Ben Abacha<sup>1</sup>, Alessandro Sordoni<sup>2,3</sup>, Lucas Caccia<sup>2</sup>, François Beaulieu<sup>1</sup>, Thomas Lin<sup>1</sup>, Jens Kleesiek<sup>4†</sup>, Paul Vozila<sup>1</sup>
4
+
5
+ <sup>1</sup>Microsoft Healthcare & Life Sciences <sup>2</sup>Microsoft Research Montréal, Canada <sup>3</sup>Mila, Université de Montréal, Canada <sup>4</sup>IKIM, University Hospital Essen, Germany
6
+
7
+ # Abstract
8
+
9
+ High computation costs and latency of large language models such as GPT-4 have limited their deployment in clinical settings. Small language models (SLMs) offer a cost-effective alternative, but their limited capacity requires biomedical domain adaptation, which remains challenging. An additional bottleneck is the unavailability and high sensitivity of clinical data. To address these challenges, we propose a novel framework for adapting SLMs into high-performing clinical models. We introduce the MediPhi collection of 3.8B-parameter SLMs developed with our novel framework: pre-instruction tuning of experts on relevant medical and clinical corpora (PMC, Medical Guideline, MedWiki, etc.), model merging, and clinical-tasks alignment. To cover most clinical tasks, we extended the CLUE benchmark to $\mathrm{CLUE+}$ , doubling its size. Our expert models deliver relative improvements on this benchmark over the base model without any task-specific fine-tuning: $64.3\%$ on medical entities, $49.5\%$ on radiology reports, and $44\%$ on ICD-10 coding (outperforming GPT-4-0125 by $14\%$ ). We unify the expert models into MediPhi via model merging, preserving gains across benchmarks. Furthermore, we built the MediFlow collection, a synthetic dataset of 2.5 million high-quality instructions on 14 medical NLP tasks, 98 fine-grained document types, and JSON format support. Alignment of MediPhi using supervised fine-tuning and direct preference optimization achieves further gains of $18.9\%$ on average.
10
+
11
+ # 1 Introduction
12
+
13
+ Advances in natural language processing (NLP) have enabled large language models (LLMs) like
14
+
15
+ ![](images/8f7b674c6584669256a31e4a1afc03658f28271dd44c4243d248007f163d8792.jpg)
16
+ Figure 1: Our approach in two steps: 1) continual pre-training; 2) alignment. 1) Starting from Phi3.5-mini-instruct: 1.a) We leverage knowledge acquisition methods such as pre-instruction tuning on diverse medical and clinical corpora to obtain domain-specific experts. 1.b) We resort to model merging, first merging each expert with the base model to combine new knowledge and recover skills degraded by the previous step, then merging the experts together to form a unified model, MediPhi. 2) We generate MediFlow, a synthetic instruction dataset for clinical tasks. We align our model on MediFlow using Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) and obtain MediPhi-Instruct. The segmentation of model parameters into equal block sizes is only for illustrative purposes.
17
+
18
+ GPT-4 to excel in medical tasks, especially medical exams (Nori et al., 2023; Ben Abacha et al., 2024; Nori et al., 2024). However, their deployment in clinical settings<sup>1</sup> faces many challenges,
19
+
20
+ including high latency and cost (Yang et al., 2023; Dennstadt et al., 2025). As LLM progress faces diminishing returns from scaling (Udandarao et al., 2024; Longpre et al., 2024; Muennighoff et al., 2023; Villalobos et al., 2022), specialized small language models (SLMs) provide a viable alternative (Sardana et al., 2024), when optimized for domain-specific performance, lower computational requirements, and real-world clinical integration.
21
+
22
+ Developing high-quality clinical language models is hindered by the unavailability of clinical data, which are sensitive and tightly licensed, e.g. protected health information under HIPAA. Current medical LLMs perform well on multiple-choice question datasets but struggle with real-world clinical complexities (Dada et al., 2024; Chen et al., 2024; Liu et al., 2024; Jeong et al., 2024a,b). Furthermore, both the inaccessibility of clinical data and the misalignment of current continual pretraining methods for clinical tasks are critical limitations in the context of SLMs, which have limited capacity. Addressing these gaps requires innovative training strategies tailored to small models.
23
+
24
+ This work introduces a modular framework for building high-performance medical SLMs, leveraging pre-instruction tuning (Jiang et al., 2024), model merging, and clinical alignment. Using pre-instruction tuning, we adapt Phi3.5 mini with 3.8B parameters (Abdin et al., 2024a) into experts trained on diverse medical and clinical corpora. We unify these models through model merging into one SLM which preserves benchmark improvements. We complete training by aligning the model with MediFlow, a new synthetic instruction dataset on clinical tasks. A representation of this approach is given in Figure 1.
25
+
26
+ Our contributions include:
27
+
28
+ - Introducing the MediPhi family, the first collection of high-performance SLMs for medical and clinical applications under a commercially permissive license. The collection includes both a generalist expert model and specialized variants.
29
+ - Releasing the MediFlow dataset of 2.5 million high-quality synthetic instructions generated with a GPT-4o based agentic pipeline,
30
+
31
+ also under a commercially permissive license<sup>3</sup>. This dataset for the clinical domain contains 14 task categories, 98 fine-grained input documents, 6 difficulty levels, and 2 output formats (JSON or plain text), filling a gap in clinical NLP resources. Additionally, we release synthetic validation sets designed to guide the model merging algorithm.
32
+
33
+ - An extension of CLUE to the CLUE+ benchmark by doubling its size from 6 to 12 datasets including complementary clinical tasks and input documents, allowing a comprehensive evaluation of medical and clinical capabilities of language models (e.g. radiology reports, medications, medical error detection, doctor-patient dialog summarization, information extraction of social determinants of health, and medical coding).
34
+ - A demonstration of the effectiveness of preinstruction tuning for medical domain adaptation extending the method beyond question-answering with named entity recognition, relation extraction, and summarization.
35
+ - A study on ICD10CM medical coding, in terms of domain adaptation and benchmarking of medical models, with relative improvements up to $44\%$ over the base model, surpassing GPT-4-0125 by $14\%$ .
36
+
37
+ # 2 Previous Work
38
+
39
+ # 2.1 Medical Large Language Models
40
+
41
+ Researchers have developed various open-weight LLMs with diverse capabilities and research licenses in the medical NLP domain. Examples include ClinicalCamel (Toma et al., 2023), Med42 (Christophe et al., 2023), PMC-Llama (Wu et al., 2024), BioMedGPT (Zhang et al., 2024), Meditron (Chen et al., 2023), BioMistral (Labrak et al., 2024) and Asclepius (Kweon et al., 2024). Most recent medical LLMs (Chen et al., 2023; Christophe et al., 2024a; Gururajan et al., 2024; Ankit Pal, 2024) are based on Llama3 (Dubey et al., 2024) — 8B and 70B parameters. Google also trained their own medical LLM named Med-PaLM 2 (Singhal et al., 2023) with 340B parameters, which is not publicly available.
42
+
43
+ # 2.2 Medical Instruction Tuning
44
+
45
+ Recently, authors have released instruction-tuned models for the medical domain: Aloe (Gururajan et al., 2024), Hippocrates (Acikgoz et al., 2024), and Med42 v2 (Christophe et al., 2024a,b). The alignment phase of these three models includes similar instruction datasets such as medical question-answering databases, non-medical alignment data (e.g. UltraChat by Ding et al.), and benchmark training sets — e.g. MedQA (Jin et al., 2021), PubMedQA (Jin et al., 2019) and ACI-Bench (Yim et al., 2023). While this mix of data has contributed to improvements, it also introduces imbalances in task and document coverage, as well as in-distribution evaluations in some cases, i.e. evaluations closer to a fine-tuning setting than to a zero-shot/few-shot setting.
46
+
47
+ # 2.3 Synthetic Instructions
48
+
49
+ Phi 1 & 2 (Gunasekar et al., 2023; Li et al., 2023), with 1.3B and 2.7B parameters, respectively, showed strong reasoning performance using synthetic textbook-like data. Phi-3 and Phi-3.5 (Abdin et al., 2024a) scaled up model size to 3.8B parameters and broadened topic coverage. Phi-4 (Abdin et al., 2024b), at 14B parameters, is trained using an iterative data generation process. Similarly, Orca (Mukherjee et al., 2023; Mitra et al., 2024) focused on reasoning data, while UltraChat (Ding et al., 2023) targeted multi-turn prompts. Zhang et al. (2023b) generated 52,000 medical question-answering instructions based on forms filled by experts. In the clinical domain, Kweon et al. (2024) generated 158k short question-answering instructions for 8 tasks with synthetic clinical documents as inputs, seeded from the PMC-Patients corpus (Zhao et al., 2023). MagPie (Xu et al., 2024) proposed the generation of synthetic instructions in general domains using LLMs' chat template.
50
+
51
+ # 2.4 Model Merging
52
+
53
+ Model Soup (Wortsman et al., 2022) demonstrated improvements over single checkpoint evaluation by averaging training checkpoints, leading to methods like spherical linear interpolation (SLERP), enabled by linear-mode connectivity (Frankle et al., 2020; Mirzadeh et al.). To extend to multiple models, authors have proposed task arithmetic (Ilharco et al.) while techniques such as TIES (Yadav et al., 2024a), DARE (Yu et al., 2024), and BreadCrumbs (Davari and Belilovsky, 2024) address the parameter
54
+
55
+ interference issue. Hammoud et al. (2024) highlighted the importance of validation sets for an optimal merge. Large-scale experiments (Yadav et al., 2024b) showed the effectiveness of merging multiple experts, especially from large instruction models. Merging has also proven as effective as data-mix strategies during pre-training (Ahmadian et al., 2024; Na et al., 2024).
56
+
57
+ # 2.5 Clinical Benchmarking
58
+
59
+ Medical LLMs are commonly benchmarked on medical knowledge through multiple-choice datasets such as MedQA (Jin et al., 2021), MedMCQA (Pal et al., 2022), PubMedQA (Jin et al., 2019), and MMLU-medical (Hendrycks et al., 2020). However, recent studies (Liu et al., 2024; Dada et al., 2024; Chen et al., 2024; Jeong et al., 2024a,b) revealed a gap in the clinical abilities of medical LLMs.
60
+
61
+ # 3 Methodology
62
+
63
+ Our method for clinical SLMs, as illustrated in Figure 1, includes two steps: 1) continual pre-training, composed of 1.a) domain knowledge acquisition methods and 1.b) model merging, and 2) post-training with supervised fine-tuning and direct preference optimization on generated synthetic data.
64
+
65
+ # 3.1 Continual Pre-Training
66
+
67
+ # 3.1.1 Datasets
68
+
69
+ In Table 1, we list the medical and clinical corpora with permissive licenses, used to adapt our SLM into five experts. We separate these into five groups: PubMed, Clinical, MedCode, Guidelines, and MedWiki. We describe these dataset groups with their licensing status in detail in Appendix A.1.
70
+
71
+ Table 1: Medical and clinical data sources separated into five groups: PubMed, Clinical, MedCode, Guidelines, and MedWiki.
72
+
73
+ <table><tr><td>Groups</td><td>Source</td><td>Document Type</td><td>#Docs</td><td>#Tokens</td></tr><tr><td rowspan="2">PubMed</td><td>PMC</td><td>Scientific Articles</td><td>3.8M</td><td>42B</td></tr><tr><td>PMC Abstract</td><td>Scientific Abstracts</td><td>36M</td><td>6B</td></tr><tr><td rowspan="4">Clinical</td><td>PMC-Patients</td><td>Patient summaries</td><td>156k</td><td>130M</td></tr><tr><td>Asclepius</td><td>Clinical Documents</td><td>80k</td><td>44M</td></tr><tr><td>NoteChat</td><td>Conversations</td><td>80k</td><td>72M</td></tr><tr><td>MTSamples</td><td>Clinical Documents</td><td>5k</td><td>4M</td></tr><tr><td>MedCode</td><td>ICD9/10&amp;ATC</td><td>Webpages</td><td>206k</td><td>257M</td></tr><tr><td>Guidelines</td><td>Guidelines</td><td>Websites</td><td>37k</td><td>90M</td></tr><tr><td>MedWiki</td><td>MedWiki</td><td>Encyclopedia</td><td>80k</td><td>80M</td></tr></table>
74
+
75
+ # 3.1.2 Domain-Specific Pre-Training
76
+
77
+ We consider three methods to enhance domain knowledge of language models: domain adaptation pre-training (DAPT), textbook-like synthetic material (Explainer), and pre-instruction tuning (PIT). The latter showed considerable improvements in our experiments. We apply these techniques on the base model to obtain five experts in the medical and clinical domain from our five dataset groups. We describe continual pre-training optimization strategies in Appendix A.2.
78
+
79
+ DAPT (Ganin and Lempitsky, 2015; Gururangan et al., 2020) Domain Adaptation Pre-Training is a technique for adapting a deep learning model to a specific domain by performing next-token prediction on domain-specific corpora. The expert trained on the PubMed group follows standard DAPT, because this group is orders of magnitude larger than the others. We also train a DataMix baseline on all the data using DAPT, which is outperformed by our approach. For this method, we trained models for one epoch with a cosine scheduler at a maximum learning rate of 5e-5.
80
+
81
+ Explainer (Gunasekar et al., 2023) Textbook-like material demonstrated the effectiveness of training SLMs on textbook-quality data generated by a strong LLM. Therefore, we generate a textbook-like "explainer" using GPT-4o-0806 for the MedCode dataset group, for which the dense format of the webpages impedes model learning. For this method, we trained models for two epochs with a cosine scheduler at a maximum learning rate of 1e-4.
82
+
83
+ PIT (Jiang et al., 2024) Pre-Instruction Tuning $^{4}$ (PIT) demonstrated significant improvements over the conventional training paradigm, by first finetuning on instruction-like data, followed by training on a concatenation of the instruction data and pretraining corpus.
84
+
85
+ PIT requires the generation of task data for each document in the corpus. While Jiang et al. (2024) originally used question-answering (QA) as the primary task, we expand the approach to include summarization, named entity recognition, and relation extraction. We use GPT-4o-0806 to generate outputs for all four tasks on the ICD10CM subset of the MedCode dataset as an initial case study. Based on the results, we extend this process
86
+
87
+ to the remaining four dataset groups — Clinical, MedCode, Guidelines, and MedWiki — to train multiple expert models.
88
+
89
+ Each expert undergoes sequential training in two phases. The first phase involves fine-tuning on the generated outputs from a single task for one epoch using a cosine scheduler (peak learning rate at 1e-4). If the task contains multiple elements, such as several question-answer pairs, they are concatenated into a single sequence, separated by end-of-sentence (EOS) tokens. In the second phase, the model is fine-tuned for two epochs with another cosine scheduler (peak learning rate at 3e-4) on the concatenation of the task data and the original documents, with EOS tokens acting as separators.
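+
+ As an illustration of the two training phases, the helper functions below sketch how the training sequences could be assembled; the EOS string and function names are assumptions rather than the authors' code.
+
+ ```python
+ EOS = "</s>"  # placeholder; the actual separator depends on the tokenizer
+
+ def phase1_sequence(task_items: list[str]) -> str:
+     """Phase 1: concatenate the generated task elements for one document
+     (e.g. several QA pairs) into a single sequence separated by EOS."""
+     return EOS.join(task_items)
+
+ def phase2_sequence(task_items: list[str], document: str) -> str:
+     """Phase 2: task data followed by the original document, again
+     separated by EOS tokens, trained with next-token prediction."""
+     return EOS.join(task_items + [document])
+ ```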
90
+
91
+ ![](images/288dff44fdbabc8c263f5a5afec8ecd86e5bbc1df08a3a9b8abc108870a73a30.jpg)
92
+ Figure 2: Overview of model merging techniques. a. Domain-specific merging via SLERP: An expert model is obtained by fine-tuning the base model on domain-specific data (step i). The expert and base models are then merged using spherical linear interpolation (SLERP) to produce the final merged model (step ii). b. Multi-expert merging: Multiple expert models are independently derived from the base model via domain-specific pretraining (step i). These are then combined using a merging operator (e.g., Task Arithmetic, Ties, or BreadCrumbs) to produce a unified merged model (step ii).
93
+
94
+ # 3.1.3 Domain-Specific Model Merging
95
+
96
+ We train five expert models specializing in different aspects of medical and clinical knowledge, each derived from a distinct dataset group: PubMed, Clinical, MedCode, Guidelines, and MedWiki.
97
+
98
+ Following the success of BioMistral (Labrak et al., 2024), the first approach involves merging each expert model individually with the base model using SLERP<sup>5</sup> as in step $a$ of Figure 2. We determined the merging proportions (10%, 25% or 50%) via validation sets (see below). We apply merging after PIT since these techniques demonstrate a synergistic effect. While PIT enhances domain-specific learning, it also leads to catastrophic forgetting — degrading the model's initial abilities such as instruction following, long context handling, and multilingual support (Scialom et al., 2022). This issue is particularly evident in zero-shot and few-shot settings, where instruction-tuned models generally outperform their base counterparts (Longpre et al., 2023; Zhang et al., 2023a). Merging with the original instruction model after PIT mitigates these degradations, preserving general capabilities while maximizing domain adaptation<sup>6</sup>.
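+
+ For reference, a minimal SLERP sketch over flattened weight vectors is shown below; it follows the generic spherical interpolation formula and is not the exact MergeKit implementation.
+
+ ```python
+ import torch
+
+ def slerp(theta_base: torch.Tensor, theta_expert: torch.Tensor,
+           t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
+     """Spherical linear interpolation between two flattened weight vectors;
+     t is the expert proportion (e.g. 0.10, 0.25, or 0.50)."""
+     a = theta_base / (theta_base.norm() + eps)
+     b = theta_expert / (theta_expert.norm() + eps)
+     omega = torch.acos(torch.clamp(torch.dot(a, b), -1.0, 1.0))
+     so = torch.sin(omega)
+     if so.abs() < eps:  # nearly collinear: fall back to linear interpolation
+         return (1 - t) * theta_base + t * theta_expert
+     return (torch.sin((1 - t) * omega) / so) * theta_base + \
+            (torch.sin(t * omega) / so) * theta_expert
+ ```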
99
+
100
+ # 3.1.4 Multi-Expert Merging into MediPhi
101
+
102
+ Our second set of experiments combines all five experts into a unified SLM forming MediPhi as in step $b$ of Figure 2. Multi-model merging involves three primary techniques: Task Arithmetic, TIES, and BreadCrumbs. Given the vast configuration space of multi-model merges, we employ an evolutionary algorithm via MergeKit (Goddard et al., 2024) to optimize the merging process.
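+
+ In its simplest form, the Task-Arithmetic operator adds weighted task vectors onto the base weights, as in the sketch below over assumed flattened parameters; TIES and BreadCrumbs additionally trim and resolve interfering parameters before this sum.
+
+ ```python
+ import torch
+
+ def task_arithmetic_merge(theta_base: torch.Tensor,
+                           expert_thetas: list[torch.Tensor],
+                           weights: list[float]) -> torch.Tensor:
+     """Merge experts by adding weighted task vectors (expert minus base)
+     back onto the base weights."""
+     merged = theta_base.clone()
+     for theta_e, w in zip(expert_thetas, weights):
+         merged += w * (theta_e - theta_base)  # task vector of one expert
+     return merged
+ ```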
103
+
104
+ However, optimization on our benchmark is not feasible due to a lack of validation data and framework incompatibilities. To address this, we generate synthetic validation sets aligned with our benchmark tasks. Specifically, we prompt GPT-4o-0806 to create multiple-choice question sets covering 12 medical and clinical topics relevant to our benchmark (e.g., doctor-patient interactions, medical coding, discharge summaries). These sets maintain contextual consistency with our evaluation tasks. The evolutionary algorithm is set to terminate after 500 evaluations, guided by average accuracy on these validation sets. Further details about validation set creation procedures are provided in Appendix A.3.
105
+
106
+ # 3.1.5 Evaluation Metrics
107
+
108
+ To identify the top-performing model, we primarily measure the average accuracy on CLUE+. However, we also consider it important that the expert achieves uniform improvements, especially among experts with similar accuracies. For this purpose, we use $\# DG$ , the number of datasets on which the model achieves gains, and CV $\Delta$ , the coefficient of variation of gains/losses as defined in Equation 1.
109
+
110
+ $$
111
+ CV\Delta = \frac{\sqrt{\mathbb{E}_{d \sim \mathcal{D}}\left[\left(\delta_{d} - \mu_{d}\right)^{2}\right]}}{|\mu_{d}|} \tag{1}
112
+ $$
113
+
114
+ where $\delta_d$ is the expert accuracy minus the baseline accuracy on the $d^{th}$ dataset of the benchmark $\mathcal{D}$ and $\mu_d = \mathbb{E}_{d\sim \mathcal{D}}[\delta_d]$ . A small CV $\Delta$ indicates uniform gains or losses across datasets, while a high value indicates gains or losses on a narrow subset of datasets.
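+
+ A small sketch of this metric, assuming per-dataset accuracies stored as NumPy arrays:
+
+ ```python
+ import numpy as np
+
+ def cv_delta(expert_acc: np.ndarray, base_acc: np.ndarray) -> float:
+     """Coefficient of variation of per-dataset gains/losses (Eq. 1)."""
+     delta = expert_acc - base_acc          # delta_d for each CLUE+ dataset
+     mu = delta.mean()                      # mu_d
+     return float(delta.std() / abs(mu))    # population std, as in Eq. (1)
+ ```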
115
+
116
+ # 3.2 Clinical Alignment
117
+
118
+ # 3.2.1 Data Generation Pipelines
119
+
120
+ ![](images/1bbe4da17858c03e90ec43b0e0b4f246c3ea6385c32afc2b65ff54f269b23c66.jpg)
121
+ Figure 3: Schema of MediFlow generation. i) Given a randomly sampled set of parameter values, we prompt GPT-4o-0806 for $N$ instructions to obtain triplets (instruction, inputs, outputs) in which we have $P$ pairs of inputs ( $I$ ) and outputs ( $O$ ) each. ii) We prompt GPT-4o mini (2024-07-18) with LLM-as-a-Judge and self-consistency on $M$ samples. iii) We define a heuristic to filter a diverse, high-quality subset for alignment purposes.
122
+
123
+ Generation of MediFlow In Figure 3, we illustrate our agentic pipeline to generate the MediFlow dataset.
124
+
125
+ At step $i$ ), we prompt GPT-4o-0806 with a meta-prompt to generate several triplets
126
+
127
+ (instruction, inputs, outputs) at once, conditioned on five parameters: input-data type, task type, difficulty level, output format, and temperature. For this step, we apply temperatures of 1.0 (70% of the dataset) or 1.25 (30% of the dataset), to strike a balance between accuracy and diversity, respectively. Specifically, we request 10 instructions at a time with four input-output pairs each.
128
+
129
+ In step ii), we prompt GPT-4o mini (2024-07-18) with a LLM-as-a-Judge approach — using self-consistency with chain-of-thought across $M = 5$ samples at temperature of 1.0 — to provide a critical assessment of the synthetic instruction. We provide five criteria in the prompt on a scale of 1 to 4: quality, clarity, alignment, realism, and difficulty. We compute the final score $S_{j} \in [1,4]$ of the $j^{th}$ criterion by summing individual scores $s_{ij} \in \{1,2,3,4\}$ multiplied by their respective counts $c_{i} \in [0,M]$ (constrained by $\sum_{i=1}^{M} c_{i} = M$ ) as in $S_{j} = \frac{1}{M} \sum_{i=1}^{M} c_{i} \cdot s_{ij}$ .
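+
+ As a worked example of this aggregation, the sketch below computes $S_j$ from the vote counts of the $M$ judge samples; the dictionary layout is an assumption for illustration.
+
+ ```python
+ def criterion_score(counts: dict[int, int], m: int = 5) -> float:
+     """Aggregate self-consistency votes into S_j in [1, 4].
+
+     `counts` maps each score value s in {1, 2, 3, 4} to the number of
+     judge samples that assigned it; the counts must sum to M."""
+     assert sum(counts.values()) == m
+     return sum(c * s for s, c in counts.items()) / m
+
+ # Example: 3 samples vote 4 and 2 vote 3 -> S_j = (3*4 + 2*3) / 5 = 3.6
+ ```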
130
+
131
+ At the final step iii), we use a heuristic to trim the collection down to its top- $K$ highest quality samples based on the quality criteria. We added details of the process in Appendix A.5.
132
+
133
+ Generation of MediFlow-DPO From MediFlow, we filter the 130k top-quality samples stratified by task type, input-data type and output format to further align the SLM with direct preference optimization (DPO) after SFT. We generate marginally wrong outputs (i.e. rejected outputs) by prompting GPT-4o-0806 as an error inducer. We provide the details in Appendix A.7.
134
+
135
+ # 3.2.2 Alignment Process SFT + DPO
136
+
137
+ We train MediPhi, the multi-expert merged SLM, with supervised fine-tuning (SFT) on MediFlow to obtain MediPhi-SFT. We then align MediPhi-SFT with DPO which leads to MediPhi-Instruct (SFT + DPO). We provide the hyperparameters for both settings in Tables 7 and 8 of Appendix A.2.
138
+
139
+ # 4 Experiments
140
+
141
+ # 4.1 CLUE+ Benchmark
142
+
143
+ The CLUE Benchmark (Dada et al., 2024) covers six datasets: MedNLI (Romanov and Shivade, 2018), MeQSum (Ben Abacha and Demner-Fushman, 2019), Problem List Summarization (Gao et al., 2023), LongHealth (Adams et al., 2024),
144
+
145
+ MeDiSumQA (Dada et al., 2025), and MeDiSumCode. For these datasets, we implement the same configuration (i.e. prompts, metrics, and few shots) as Dada et al. (2024). While CLUE focused on six tasks using clinical notes and discharge summaries as input data, we extend it with additional clinical input documents (e.g., radiology reports, doctor-patient dialog) and tasks (e.g. information extraction, medical error detection) to obtain a broader assessment of clinical abilities. We introduce the CLUE+ benchmark, with six additional datasets: MedicationQA (Ben Abacha et al., 2019), MEDIQA-RRS QA (Ben Abacha et al., 2021), MEDEC (Ben Abacha et al., 2024), ACI-Bench (Yim et al., 2023), Social Determinant of Health (Lybarger et al., 2023), and MedConceptsQA ICD10CM (Shoham and Rappoport, 2024). We provide further information in the Appendix A.8 in Tables 9 and 10.
146
+
147
+ # 4.2 Continual Pre-Training
148
+
149
+ To assess the efficiency of different continual pretraining methods, we focus on ICD10CM medical coding webpages from the MedCode dataset. After continual pre-training on this subset, we test the models on ICD10CM questions from the MedConceptsQA dataset (Shoham and Rappoport, 2024). We left out the easy and medium difficulty questions to focus on the most challenging questions.
150
+
151
+ # 4.2.1 DAPT vs. Explainer vs. PIT
152
+
153
+ In Figure 4, we plot the accuracy of the different continual pre-training methods DAPT, Explainer and PIT, as described in Sec. 3.1.2. First, we note that DAPT diminishes performance to random chance (Webpage). We hypothesize that the webpages have a peculiar implicit format. One solution is to reformulate them as textbook-like explainers. The model fine-tuned on explainers improves upon the baseline by $6\%$ (Explainers). We see that using PIT with generated summaries (Summary) increases performance by a further $8\%$ .
154
+
155
+ # 4.2.2 Domain-Specific Merging
156
+
157
+ In Figure 5, we experiment with merging back into the base model after fine-tuning on the explainers or training with PIT.
158
+
159
+ We observe significant improvements across tasks for SLERP-merged models, with the Summary model performing best. Although question answering (QA), named entity recognition (Entities), and relation extraction (Relations)
160
+
161
+ ![](images/0c46764dc3600d42ace487e732ef68f6b94b9407317a2d5769d218e94741e467.jpg)
162
+ Figure 4: Performance of different continual pretraining methods: domain adaptation pre-training (DAPT) on ICD10CM webpages, fine-tuning on synthetic textbook-like material generated with GPT-4o-0806 (Explainer), and pre-instruction tuning (PIT) with different synthetically generated tasks: QA, summarization, NER, and relation extraction.
163
+
164
+ ![](images/6cb4dcbff24c311f85cafa5bdf5944d730d0f72146942c262135dc33b077a04b.jpg)
165
+ Figure 5: Impact of SLERP merging on the performance of ICD10CM. Merging back with the base model (SLERP $50\%$ ) systematically results in gains and these are more pronounced for PIT.
166
+
167
+ initially decline compared to the base model before merging, SLERP not only boosts the explainer model to $58\%$ (a $28\%$ relative improvement over the base model) but also enhances all PIT-trained models further—reaching $62\%$ (38% relative) for QA and $65\%$ (44% relative) for Summary, surpassing GPT-4-0125 by $8\%$ (14% relative).
168
+
169
+ Based on these results, the rest of the paper will apply PIT with summaries, unless otherwise stated.
170
+
171
+ # 4.2.3 Multi-Domain Merging
172
+
173
+ We present the results on the CLUE+ benchmark of the five medical and clinical experts, adapted on the dataset groups and merged back with the base
174
+
175
+ model with SLERP in Table 2. While the DataMix merged model trained on all the corpora (i.e. similar to BioMistral) achieves gains over the base model, the individual experts realize further gains, except for MedCode. The latter improves over the base model but mostly only on coding datasets (i.e. MeDiSumCode and MedConceptsQA, see Tables 11 and 12 in Appendix A.8 for the detailed benchmark). The highest enhancement comes from MedWiki on 10 out of 12 datasets, with an average improvement of $3.2\%$ ( $8.8\%$ relative). One possible explanation is that the model learns better on educational content — such as encyclopedic material — with vast coverage of medical concepts. We observe that removing PIT and/or merging on the Guideline group leads to the worst outcomes, below the baselines.
176
+
177
+ Table 2: Average performances on CLUE+ of SLERP-merged experts obtained with PIT on the five dataset groups. We provide DataMix as a baseline SLERP expert trained on all dataset groups. We also provide an ablation study on Guideline. We indicate gains and losses over the base model. #DG stands for datasets with gains (out of 12 datasets). CV $\Delta$ stands for the coefficient of variation of gains/losses.
178
+
179
+ <table><tr><td></td><td></td><td>AVG↑</td><td>#DG↑</td><td>CV Δ↓</td></tr><tr><td rowspan="2">Baselines</td><td>Phi-3.5 mini</td><td>36.5</td><td>-</td><td>-</td></tr><tr><td>DataMix</td><td>37.5</td><td>10</td><td>1.2</td></tr><tr><td rowspan="5">SLERP Experts</td><td>PubMed</td><td>37.7</td><td>9</td><td>1.8</td></tr><tr><td>Clinical</td><td>39.6</td><td>10</td><td>2.0</td></tr><tr><td>MedWiki</td><td>39.7</td><td>10</td><td>1.5</td></tr><tr><td>MedCode</td><td>36.7</td><td>5</td><td>39.6</td></tr><tr><td>Guideline</td><td>39.2</td><td>10</td><td>1.8</td></tr><tr><td rowspan="3">Guideline Ablations</td><td>w/o SLERP</td><td>27.2</td><td>4</td><td>1.0</td></tr><tr><td>w/o PIT</td><td>33.0</td><td>6</td><td>0.9</td></tr><tr><td>w/o SLERP&amp;PIT</td><td>25.2</td><td>1</td><td>2.8</td></tr></table>
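As a small worked example of the CV $\Delta$ column, assuming the standard definition of the coefficient of variation (standard deviation of the per-dataset gains/losses divided by the absolute value of their mean); the deltas below are illustrative placeholders, not values from the table.

```python
import numpy as np

def cv_delta(per_dataset_deltas):
    """Coefficient of variation of per-dataset gains/losses over the base model."""
    deltas = np.asarray(per_dataset_deltas, dtype=float)
    return float(np.std(deltas) / abs(np.mean(deltas)))

print(round(cv_delta([2.1, -0.4, 3.0, 1.2]), 1))  # illustrative deltas only
```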
180
+
181
+ We present the results on CLUE+ of combining all experts using three multi-model merging techniques (Task-Arithmetic, Ties-merging, and BreadCrumbs) in Table 3. While Task-Arithmetic yields the highest score of 39.4 with gains on 9 out of 12 benchmark datasets, the models from Ties and BreadCrumbs remain competitive, demonstrating more uniform gains across the benchmark, as indicated by their lower CV $\Delta$ .
182
+
183
+ Table 3: Average performance on CLUE+ of unifying experts into one SLM. We leverage Task-Arithmetic (TA), TIES-merging and BreadCrumbs (BC) for merging the 5 experts: PubMed, Clinical, MedWiki, MedCode and Guideline. We provide statistics across dataset performances of the SLERP experts, translating into worst case (minimum), average, and best case (maximum). We indicate gains and losses over the baseline. #DG stands for datasets with gains (out of 12 datasets). CV $\Delta$ stands for the coefficient of variation of gains/losses.
184
+
185
+ <table><tr><td></td><td></td><td>AVG↑</td><td>#DG↑</td><td>CV Δ↓</td></tr><tr><td>Baseline</td><td>Phi-3.5 mini</td><td>36.5</td><td>-</td><td>-</td></tr><tr><td rowspan="4">SLERP Experts</td><td>Minimum</td><td>34.5</td><td>4</td><td>1.9</td></tr><tr><td>Average</td><td>38.5</td><td>7</td><td>2.1</td></tr><tr><td>Maximum</td><td>43.1</td><td>11</td><td>1.1</td></tr><tr><td>DataMix</td><td>37.5</td><td>10</td><td>1.2</td></tr><tr><td rowspan="3">Unified SLM</td><td>Task-Arithmetic</td><td>39.4</td><td>9</td><td>1.9</td></tr><tr><td>Ties</td><td>39.3</td><td>7</td><td>1.7</td></tr><tr><td>BreadCrumbs</td><td>39.3</td><td>11</td><td>1.5</td></tr></table>
186
+
187
+ Since the goal is to select the most robust model for the alignment phase, we find that the BreadCrumbs expert offers the best trade-off between high-amplitude improvements and consistent gains on 11 datasets. We also see that all unified SLMs are above the average expectation of the SLERP performances by $0.9\%$ , but below the maximum values and below the top-performing MedWiki expert by $0.3\%$ . Yet, this expert improves only on 10 datasets, one less than the BreadCrumbs-merged expert. We also observe for BreadCrumbs a stronger improvement over MedWiki of $10.6\%$ on ICD10CM medical coding (see Table 12 in Appendix A.8), which is still lower than the specialized MedCode expert with an improvement of $36.9\%$ over the MedWiki expert. We select the BreadCrumbs expert as MediPhi for its strong, balanced average score on CLUE+.
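As an illustrative sketch of how the five experts can be unified, the snippet below implements plain Task-Arithmetic; the scaling coefficient `lam` is a hypothetical value, not one reported in the paper.

```python
import torch

def task_arithmetic_merge(base_state: dict, expert_states: list[dict], lam: float = 0.5) -> dict:
    """theta_merged = theta_base + lam * sum_i (theta_expert_i - theta_base)."""
    merged = {}
    for name, base_param in base_state.items():
        task_vectors = [expert[name] - base_param for expert in expert_states]
        merged[name] = base_param + lam * torch.stack(task_vectors).sum(dim=0)
    return merged
```

Roughly speaking, TIES-merging additionally trims low-magnitude entries of each task vector and resolves sign conflicts before summing, while BreadCrumbs masks out both the largest and the smallest entries.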
188
+
189
+ # 4.3 Post-Training
190
+
191
+ We show the results of aligning SLMs with the MediFlow dataset, followed by MediFlow-DPO, in Table 4. To begin, we note that aligning MediPhi on all instructions (2.5M) leads to an accuracy of 41.9, an improvement of $5.4\%$ over Phi3.5. By filtering MediFlow down to the top-quality 800k instructions, we push the gain further to $6.5\%$ , reaching $43.0\%$ . Then, we apply DPO using MediFlow-DPO to realize an improvement of $6.9\%$ (18.9% relative), with the maximum average ending up at $43.4\%$ . Notable
192
+
193
+ Table 4: Average performance on CLUE+ of aligning Mediphi with MediFlow using SFT followed by DPO. We indicate gains and losses over the baseline. #DG stands for datasets with gains (out of 12 datasets). CV $\Delta$ stands for coefficient of variations of gains/losses.
194
+
195
+ <table><tr><td></td><td></td><td>AVG↑</td><td>#DG↑</td><td>CV Δ↓</td></tr><tr><td rowspan="7">Alignment of SLMs</td><td>Phi3.5 mini</td><td>36.5</td><td>-</td><td>-</td></tr><tr><td>+SFT 800K</td><td>42.2</td><td>9</td><td>1.4</td></tr><tr><td>+DPO</td><td>42.2</td><td>8</td><td>1.4</td></tr><tr><td>MediPhi</td><td>39.3</td><td>11</td><td>1.5</td></tr><tr><td>+SFT 2.5M</td><td>41.9</td><td>9</td><td>1.6</td></tr><tr><td>+SFT 800K</td><td>43.0</td><td>9</td><td>1.4</td></tr><tr><td>+DPO</td><td>43.4</td><td>9</td><td>1.4</td></tr></table>
196
+
197
+ relative gains include $64.3\%$ on medical entities (SDoH) and $49.5\%$ on radiology reports (RRS QA) (see Table 12). If we align Phi3.5 on MediFlow using SFT, we reach an accuracy of $42.2\%$ . While this surpasses MediPhi-SFT 2.5M by $0.3\%$ , MediPhi-SFT 800k surpasses it by a margin of $2\%$ (relative). Despite achieving gains overall, we observe a decrease in $\# DG$ from 11 to 9 for all aligned models. The lower-performing datasets for all aligned models are Problem List Summarization and MeDiSumCode, which we hypothesize results from a specific bias in MediFlow towards listing tasks such as extracting problems from clinical notes or medical codes from discharge summaries (see Tables 11 and 12 in Appendix A.8).
198
+
199
+ Dada et al. (2024) highlighted a gap in medical LLMs' performance in clinical settings based on the CLUE benchmark. Out of the twelve medical LLMs in their study, only two improve upon their base model, as shown in Table 5: BioMistral with DARE model merging, and Med42 with SFT alignment. By applying PIT, merging, and clinical alignment, MediPhi yields the highest improvement over its base model on CLUE+, achieving a +6.9% gain, compared to +1.1% for BioMistral and +1.2% for Med42.
200
+
201
+ Although Phi3.5 already surpasses Mistral models, the MediPhi SLMs further widen this gap. Despite being less than half the size of LLaMA3 (8B), MediPhi-Instruct (3.8B) achieves near-parity on CLUE+, with a performance difference below $1\%$ . MediPhi-Instruct outperforms LLaMA3 on four key datasets: ICD10CM (+29.2%), MeDiSumCode (+13.9%), RRS QA (+5.8%), and MeQ-
202
+
203
+ Table 5: Performances on CLUE+ of other medical LLMs compared with MediPhi models. We indicate gains and losses over the respective base models. #DG stands for datasets with gains (out of 12 datasets). CV $\Delta$ stands for the coefficient of variation of gains/losses.
204
+
205
+ <table><tr><td>Model</td><td>AVG↑</td><td>#DG↑</td><td>CV Δ↓</td></tr><tr><td>Mistral-7B-Instruct-v0.1</td><td>33.6</td><td>-</td><td>-</td></tr><tr><td>BioMistral-7B-DARE</td><td>34.7 (+1.1)</td><td>8</td><td>3.4</td></tr><tr><td>Phi-3.5 mini (3.8B)</td><td>36.5</td><td>-</td><td>-</td></tr><tr><td>MediPhi (3.8B)</td><td>39.3 (+2.8)</td><td>11</td><td>1.5</td></tr><tr><td>MediPhi-SFT (3.8B)</td><td>43.0 (+6.5)</td><td>9</td><td>1.4</td></tr><tr><td>MediPhi-Instruct (3.8B)</td><td>43.4 (+6.9)</td><td>9</td><td>1.4</td></tr><tr><td>Meta-Llama-3-8B-Instruct</td><td>44.1</td><td>-</td><td>-</td></tr><tr><td>Llama3-Med42-8B*</td><td>45.3 (+1.2)</td><td>5</td><td>7.8</td></tr></table>
206
+
207
+ *fine-tuned on ACI-Bench, i.e. not a few-shot setting.
208
+
209
+ Sum $(+3.3\%)$ . Med42 improves over LLaMA3 by $+1.2\%$ , mainly due to a $+27.7\%$ boost on ICD10CM as well as being fine-tuned on the trainset of ACI-Bench. However, MediPhi surpasses Med42 in relative percentages on four tasks: ICD10CM by $2.1\%$ , RRS QA by $13.9\%$ , SDoH by $5.2\%$ , and MeDiSumCode by $47.6\%$ . Moreover, it is competitive within $1\%$ in absolute terms on three other tasks: MeQSum, MeDiSumQA and MEDEC. MediPhi models (3.8B) also achieve wider improvements over their base model, with a $\# DG$ between 9 and 11, compared to 5 for Med42 (8B).
210
+
211
+ We summarize the overall progression of our approach in Figure 6.
212
+
213
+ # 5 Conclusion
214
+
215
+ In this work, we introduced MediPhi, the first clinical-focused SLM collection, alongside MediFlow, a large-scale synthetic instruction dataset for clinical alignment. Our results show that PIT significantly enhances domain adaptation, especially when combined with model merging. Notably, our medical coding expert surpasses GPT-4-0125 on the ICD10CM benchmark, and aligning MediPhi with MediFlow improves CLUE+ performance by $18.9\%$ on average. By releasing these resources, we aim to enhance reproducibility, drive SLM adoption in clinical settings, and foster MediPhi's continued development through its modular design.
216
+
217
+ ![](images/c1d06d7604d7375688a284474b60eef5dcd802ccc2dc716674e52b6170db692b.jpg)
218
+ Figure 6: Summary of improvements from Phi3.5 to MediPhi-Instruct (SFT+DPO). DataMix is adapted on all corpora. SLERP PIT AVG is the average of five experts trained with PIT on each dataset. MediPhi is the unified expert from the five experts. MediPhi-SFT is an instruct model based on SFT only.
219
+
220
+ # Limitations
221
+
222
+ This study required substantial resources: 8x80GB A100 GPUs on Azure Machine Learning for approximately 12,000 GPU·hours, of which close to 3,600 GPU·hours were dedicated to producing and evaluating the final model. It also required access to Azure OpenAI services for GPT-4o, GPT-4o mini and text-embedding-3-large (i.e. close to 25B input-output tokens).
223
+
224
+ The MediFlow corpus has a few limitations. The first is that we set our scope to one-turn instructions rather than multi-turn conversations as in datasets like UltraChat (Ding et al., 2023). The second is the limited amount of very-high-complexity tasks (i.e. multi-step reasoning tasks) with very long inputs and outputs. The third is that MediFlow is an English-only clinical corpus. To broaden the scope of MediFlow, future work might use seed data from clinical corpora as well as expand our agentic pipeline. While MediPhi and MediPhi-Instruct might conserve abilities from Phi3.5-mini-instruct (i.e. multilingual, conversational, safety alignment, etc.) through model merging (Hammoud et al., 2024), we hypothesize that these abilities could be affected; their impact was not studied beyond medical, clinical, and instruction-following abilities. Thus, our recommendation is to use the MediPhi collection exclusively on clinical NLP tasks. We also strongly advise verification of the model outputs by an expert in the specific medical field of the task.
225
+
226
+ Our CLUE+ benchmark expands upon the CLUE coverage in terms of tasks, input data
227
+
228
+ and datasets. While the coverage of the clinical field is large, a few gaps remain. Information-extraction tasks are represented only by the SDoH dataset, and there is no input data from the nursing subfield (e.g. nurse-patient dialogs or notes).
229
+
230
+ Given the rapid evolution of OpenAI's GPT-4o models as well as their stochastic nature, exact future reproduction of parts of this work may become impossible if the mentioned versions are no longer maintained.
231
+
232
+ # Ethics Statement
233
+
234
+ We acknowledge that the substantial computational resources required for this work, including GPU hours and API calls, contribute to carbon emissions as well as limit the reproduction of this work. However, we believe that the knowledge and artifacts produced — such as the MediPhi SLM collection and the MediFlow corpus — offer significant positive impacts. These include enabling the adoption of SLMs in clinical settings, potentially reducing carbon emissions in the medium to long term, and fostering further research on continual pre-training in medical, clinical, and other domains.
235
+
236
+ Additionally, our modular approach with releases of models and datasets allows researchers and practitioners to build directly on this work, promoting accessibility and collaboration. In the future, strong language models in the clinical field can positively impact public health and support the work of healthcare providers, enhancing patient care and operational efficiency.
237
+
238
+ # References
239
+
240
+ 2019. Mtsamples. https://mtsamples.com/. Accessed: 2024-11-24.
241
+ Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. 2024a. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.
242
+ Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J Hewett, Mojan Javaheripi, Piero Kauffmann, et al. 2024b. Phi-4 technical report. arXiv preprint arXiv:2412.08905.
243
+ Emre Can Acikgoz, Osman Batur Ince, Rayene Bech, Arda Anil Boz, Ilker Kesen, Aykut Erdem, and Erkut
244
+
245
+ Erdem. 2024. Hippocrates: An open-source framework for advancing large language models in healthcare. In GenAI for Health: Potential, Trust and Policy Compliance.
246
+ Lisa Adams, Felix Busch, Tianyu Han, Jean-Baptiste Excoffier, Matthieu Ortala, Alexander Löser, Hugo JWL Aerts, Jakob Nikolas Kather, Daniel Truhn, and Keno K Bressem. 2024. Longhealth: A question answering benchmark with long clinical documents. CoRR.
247
+ Arash Ahmadian, Seraphina Goldfarb-Tarrant, Beyza Ermis, Marzieh Fadaee, Sara Hooker, et al. 2024. Mix data or merge models? optimizing for diverse multi-task learning. In Safe Generative AI Workshop.
248
+ Malaikannan Sankarasubbu Ankit Pal. 2024. Openbiollms: Advancing open-source large language models for healthcare and life sciences. https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B.
249
+ Asma Ben Abacha and Dina Demner-Fushman. 2019. On the summarization of consumer health questions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28th - August 2.
250
+ Asma Ben Abacha, Yassine Mrabet, Mark Sharp, Travis Goodwin, Sonya E. Shooshan, and Dina Demner-Fushman. 2019. Bridging the gap between consumers' medication questions and trusted answers. In MEDINFO 2019.
251
+ Asma Ben Abacha, Yassine M'rabet, Yuhao Zhang, Chaitanya Shivade, Curtis Langlotz, and Dina Demner-Fushman. 2021. Overview of the MEDIQA 2021 shared task on summarization in the medical domain. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 74-85.
252
+ Asma Ben Abacha, Wen-wai Yim, Yujuan Fu, Zhaoyi Sun, Meliha Yetisgen, Fei Xia, and Thomas Lin. 2024. Medec: A benchmark for medical error detection and correction in clinical notes. arXiv preprint arXiv:2412.19260.
253
+ Yekun Chai. 2019. eval4ner: An all-round evaluation for named entity recognition. https://github.com/cyk1337/eval4ner.
254
+ Canyu Chen, Jian Yu, Shan Chen, Che Liu, Zhongwei Wan, Danielle Bitterman, Fei Wang, and Kai Shu. 2024. Clinicalbench: Can llms beat traditional ml models in clinical prediction? In GenAI for Health: Potential, Trust and Policy Compliance.
255
+ Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, et al. 2023. Meditron-70b: Scaling medical pretraining for large language models. arXiv preprint arXiv:2311.16079.
256
+
257
+ Clement Christophe, Avani Gupta, Nasir Hayat, Praveen Kanithi, Ahmed Al-Mahrooqi, Prateek Munjal, Marco Pimentel, Tathagata Raha, Ronnie Rajan, and Shadab Khan. 2023. Med42 - a clinical large language model.
258
+ Clément Christophe, Praveen K Kanithi, Tathagata Raha, Shadab Khan, and Marco AF Pimentel. 2024a. Med42-v2: A suite of clinical llms. arXiv preprint arXiv:2408.06142.
259
+ Clement Christophe, Tathagata Raha, Svetlana Maslenkova, Muhammad Umar Salman, Praveenkumar Kanithi, Marco Pimentel, and Shadab Khan. 2024b. Beyond fine-tuning: Unleashing the potential of continuous pretraining for clinical llms. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 10549-10561.
260
+ Jean-Philippe Corbeil. 2024. Iryonlp at mediqa-corr 2024: Tackling the medical error detection & correction task on the shoulders of medical agents. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 570-580.
261
+ Amin Dada, Osman Alperen Koras, Marie Bauer, Amanda Butler, Kaleb E Smith, Jens Kleesiek, and Julian Friedrich. 2025. Medisumqa: Patient-oriented question-answer generation from discharge letters. arXiv preprint arXiv:2502.03298.
262
+ Amin Dada, Osman Alperen Koras, Marie Bauer Contreras, Amanda Butler, Kaleb E Smith Seibold, Marc Constantin, and Jens Kleesiek. 2024. Does biomedical training lead to better medical performance? arXiv preprint arXiv:2404.04067v4.
263
+ MohammadReza Davari and Eugene Belilovsky. 2024. Model breadcrumbs: Scaling multi-task model merging with sparse masks. In European Conference on Computer Vision, pages 270-287. Springer.
264
+ Dina Demner-Fushman, Marc D Kohli, Marc B Rosenman, Sonya E Shooshan, Laritza Rodriguez, Sameer Antani, George R Thoma, and Clement J McDonald. 2016. Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association, 23(2):304-310.
265
+ Fabio Dennstadt, Janna Hastings, Paul Martin Putora, Max Schmerder, and Nikola Cihoric. 2025. Implementing large language models in healthcare while balancing control, collaboration, costs and security. npj Digital Medicine, 8(1):143.
266
+ Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, and Stefano Soatto. 2024. Fewer truncations improve language modeling. In *Forty-first International Conference on Machine Learning*.
267
+ Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. 2023. Enhancing chat language models by scaling high-quality instructional conversations.
268
+
269
+ In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3029-3051.
270
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
271
+ Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pages 3259-3269. PMLR.
272
+ Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1180-1189, Lille, France. PMLR.
273
+ Yanjun Gao, Timothy Miller, Majid Afshar, and Dmitriy Dligach. 2023. Bionlp workshop 2023 shared task 1a: Problem list summarization. In Proceedings of the 22nd Workshop on Biomedical Language Processing.
274
+ Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vladimir Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz. 2024. Arcee's MergeKit: A toolkit for merging large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 477-485, Miami, Florida, US. Association for Computational Linguistics.
275
+ Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio Cesar Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644.
276
+ Ashwin Kumar Gururajan, Enrique Lopez-Cuena, Jordi Bayarri-Planas, Adrián Tormos, Daniel Hinxos, Pablo Bernabeu-Perez, Anna Arias-Duart, Pablo Agustin Martin-Torres, Lucia Urcelay-Ganzabal, Marta Gonzalez-Mallo, et al. 2024. Aloe: A family of fine-tuned open healthcare llms. CoRR.
277
+ Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
278
+ Hasan Hammoud, Umberto Michieli, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem, and Mete Ozay. 2024. Model merging and safety alignment: One bad model spoils the bunch. In *Findings of the Association for Computational Linguistics: EMNLP* 2024, pages 13033-13046.
279
+
280
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. In International Conference on Learning Representations.
281
+ Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2023. Editing models with task arithmetic. In The Eleventh International Conference on Learning Representations.
282
+ Daniel Jeong, Saurabh Garg, Zachary C Lipton, and Michael Oberst. 2024a. Medical adaptation of large language and vision-language models: Are we making progress? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12143-12170.
283
+ Daniel P Jeong, Pranav Mani, Saurabh Garg, Zachary C Lipton, and Michael Oberst. 2024b. The limited impact of medical adaptation of large language and vision-language models. arXiv preprint arXiv:2411.08870.
284
+ Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Lin, Wen-tau Yih, and Srini Iyer. 2024. Instruction-tuned language models are better knowledge learners. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5421-5434.
285
+ Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421.
286
+ Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567-2577.
287
+ Achintya Kundu, Rhui Dih Lee, Laura Wynter, Raghu Kiran Ganti, and Mayank Mishra. 2024. Enhancing training efficiency using packing with flash attention. arXiv preprint arXiv:2407.09105.
288
+ Sunjun Kweon, Junu Kim, Jiyoun Kim, Sujeong Im, Eunbyeol Cho, Seongsu Bae, Jungwoo Oh, Gyubok Lee, Jong Hak Moon, Seng Chan You, et al. 2024. Publicly shareable clinical large language model built on synthetic clinical notes. In Findings of the Association for Computational Linguistics ACL 2024, pages 5148-5168.
289
+ Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-Antoine Gourraud, Mickael Rouvier, and Richard Dufour. 2024. Biomistral: A collection of open-source pretrained large language models for medical domains. In *Findings of the Association for Computational Linguistics ACL* 2024, pages 5848-5864.
290
+
291
+ Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463.
292
+ Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
293
+ Fenglin Liu, Zheng Li, Hongjian Zhou, Qingyu Yin, Jingfeng Yang, Xianfeng Tang, Chen Luo, Ming Zeng, Haoming Jiang, Yifan Gao, Priyanka Nigam, Sreyashi Nag, Bing Yin, Yining Hua, Xuan Zhou, Amid Rohanian, Anshul Thakur, Lei Clifton, and David A. Clifton. 2024. Large language models are poor clinical decision-makers: A comprehensive benchmark. medRxiv.
294
+ Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. 2023. The flan collection: designing data and methods for effective instruction tuning. In Proceedings of the 40th International Conference on Machine Learning, pages 22631-22648.
295
+ Shayne Longpre, Robert Mahari, Ariel Lee, Campbell Lund, Hamidah Oderinwale, William Brannon, Nayan Saxena, Naana Obeng-Marnu, Tobin South, Cole Hunter, et al. 2024. Consent in crisis: The rapid decline of the ai data commons. In NEURIPS.
296
+ Kevin Lybarger, Meliha Yetisgen, and Ozlem Uzuner. 2023. The 2022 n2c2/uw shared task on extracting social determinants of health. Journal of the American Medical Informatics Association, 30(8):1367-1378.
297
+ Leland McInnes, John Healy, Steve Astels, et al. 2017. hdbscan: Hierarchical density based clustering. J. Open Source Softw., 2(11):205.
298
+ Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Gorur, Razvan Pascanu, and Hassan Ghasemzadeh. Linear mode connectivity in multitask and continual learning. In International Conference on Learning Representations.
299
+ Arindam Mitra, Luciano Del Corro, Guoqing Zheng, Shweti Mahajan, Dany Rouhana, Andres Codas, Yadong Lu, Wei-ge Chen, Olga Vrousgos, Corby Rosset, et al. 2024. Agentinstruct: Toward generative teaching with agentic flows. arXiv preprint arXiv:2407.03502.
300
+ Niklas Muennighoff, Alexander Rush, Boaz Barak, Teven Le Scao, Nouamane Tazi, Aleksandra Piktus, Sampo Pyysalo, Thomas Wolf, and Colin A Raffel. 2023. Scaling data-constrained language models. Advances in Neural Information Processing Systems, 36:50358-50376.
301
+ Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023. Orca: Progressive learning from
302
+
303
+ complex explanation traces of gpt-4. Preprint, arXiv:2306.02707.
304
+ Clara Na, Ian Magnusson, Ananya Harsh Jha, Tom Sherborne, Emma Strubell, Jesse Dodge, and Pradeep Dasigi. 2024. Scalable data ablation approximations for language models through modular training and merging. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21125-21141.
305
+ National Library of Medicine. 2003. Pmc open access subset [internet]. Bethesda (MD): National Library of Medicine.
306
+ Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolo Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, et al. 2023. Can generalist foundation models outcompete special-purpose tuning? case study in medicine. CoRR.
307
+ Harsha Nori, Naoto Usuyama, Nicholas King, Scott Mayer McKinney, Xavier Fernandes, Sheng Zhang, and Eric Horvitz. 2024. From medprompt to o1: Exploration of run-time strategies for medical challenge problems and beyond. arXiv preprint arXiv:2411.03590.
308
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.
309
+ Ankit Pal, Logesh Kumar Umapathi, and Malaikanan Sankarasubbu. 2022. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on health, inference, and learning, pages 248-260. PMLR.
310
+ Alexandre Ramé, Johan Ferret, Nino Vieillard, Robert Dadashi, Léonard Hussenot, Pierre-Louis Cedoz, Pier Giuseppe Sessa, Sertan Girgin, Arthur Douillard, and Olivier Bachem. 2024. Warp: On the benefits of weight averaged rewarded policies. CoRR.
311
+ Alexey Romanov and Chaitanya Shivade. 2018. Lessons from natural language inference in the clinical domain. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1586-1596, Brussels, Belgium. Association for Computational Linguistics.
312
+ Nikhil Sardana, Jacob Portes, Sasha Doubov, and Jonathan Frankle. 2024. Beyond chinchilla-optimal: accounting for inference in language model scaling laws. In Proceedings of the 41st International Conference on Machine Learning, pages 43445-43460.
313
+ Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Fine-tuned language models are continual learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6107-6122.
314
+
315
+ Ofir Ben Shoham and Nadav Rappoport. 2024. MedConceptsQA: Open source medical concepts QA benchmark. Computers in Biology and Medicine, 182:109089.
316
+ Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature, 620(7972):172-180.
317
+ Augustin Toma, Patrick R Lawler, Jimmy Ba, Rahul G Krishnan, Barry B Rubin, and Bo Wang. 2023. Clinical camel: An open-source expert-level medical language model with dialogue-based knowledge encoding. CoRR.
318
+ Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip Torr, Adel Bibi, Samuel Albanie, and Matthias Bethge. 2024. No" zero-shot without exponential data: Pretraining concept frequency determines multimodal model performance. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
319
+ Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, and An Chang Ho. 2022. Will we run out of data? limits of llm scaling based on human-generated data.
320
+ Junda Wang, Zonghai Yao, Zhichao Yang, Huixue Zhou, Rumeng Li, Xun Wang, Yucheng Xu, and Hong Yu. 2023. Notechat: a dataset of synthetic doctor-patient conversations conditioned on clinical notes. arXiv preprint arXiv:2310.15959.
321
+ Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. 2022. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International conference on machine learning, pages 23965-23998. PMLR.
322
+ C Wu, W Lin, X Zhang, Y Zhang, W Xie, and Y Wang. 2024. Pmc-llama: toward building open-source language models for medicine. Journal of the American Medical Informatics Association: JAMIA, pages ocae045-ocae045.
323
+ Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, and Bill Yuchen Lin. 2024. Magpie: Alignment data synthesis from scratch by prompting aligned llms with nothing. CoRR.
324
+ Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. 2024a. Ties-merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36.
325
+ Prateek Yadav, Tu Vu, Jonathan Lai, Alexandra Chronopoulou, Manaal Faruqui, Mohit Bansal, and Tsendsuren Munkhdalai. 2024b. What matters
326
+
327
+ for model merging at scale? arXiv preprint arXiv:2410.03617.
328
+
329
+ Rui Yang, Ting Fang Tan, Wei Lu, Arun James Thirunavukarasu, Daniel Shu Wei Ting, and Nan Liu. 2023. Large language models in health care: Development, applications, and challenges. Health Care Science, 2(4):255-263.
330
+
331
+ Wen-wai Yim, Yujuan Fu, Asma Ben Abacha, Neal Snider, Thomas Lin, and Meliha Yetisgen. 2023. Acibench: a novel ambient clinical intelligence dataset for benchmarking automatic visit note generation. Scientific Data, 10(1):586.
332
+
333
+ Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In *Forty-first International Conference on Machine Learning*.
334
+
335
+ Kai Zhang, Rong Zhou, Eashan Adhikarla, Zhiling Yan, Yixin Liu, Jun Yu, Zhengliang Liu, Xun Chen, Brian D Davison, Hui Ren, et al. 2024. A generalist vision-language foundation model for diverse biomedical tasks. Nature Medicine, pages 1-13.
336
+
337
+ Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, et al. 2023a. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792.
338
+
339
+ Xinlu Zhang, Chenxin Tian, Xianjun Yang, Lichang Chen, Zekun Li, and Linda Ruth Petzold. 2023b. Alpacare: Instruction-tuned large language models for medical application. CoRR.
340
+
341
+ Zhengyun Zhao, Qiao Jin, Fangyuan Chen, Tuorui Peng, and Sheng Yu. 2023. A large-scale dataset of patient summaries for retrieval-based clinical decision support systems. Scientific Data, 10(1):909.
342
+
343
+ # A Appendix
344
+
345
+ # A.1 Information on Dataset Groups
346
+
347
+ The PubMed group is the largest by 2 orders of magnitude, with about 48B tokens. PubMed Central (National Library of Medicine, 2003) has a segment licensed for commercial use, while abstracts are public.
348
+
349
+ On the general medical side, we have medical Wikipedia, known as MedWiki (Corbeil, 2024). We also fetch the open medical Guidelines (Chen et al., 2023), which come from multiple recognized health organizations (e.g. WHO). Then, we gather medical coding corpora as MedCode from public websites: ICD9CM, ICD9PROC, ICD10CM, ICD10PROC and ATC. The ICD coding is a wide taxonomy of diseases and medical conditions as
350
+
351
+ well as procedure codes, while Anatomical Therapeutic Chemical (ATC) codes are a classification of medical drugs.
352
+
353
+ In the Clinical group, we leverage PMC-Patients v2 (Zhao et al., 2023), a subset with distribution- and commercial-friendly CC licenses, which contains clinical cases. We also include synthetic derivative datasets under the same licensing conditions: Asclepius (Kweon et al., 2024), containing clinical documents, and NoteChat (Wang et al., 2023), containing doctor-patient dialogues. Furthermore, we include MTSamples (mts, 2019), a public database filled with various de-identified clinical documents.
354
+
355
+ # A.2 Hyperparameters & Pre-Training Optimization Strategies
356
+
357
+ We use the HuggingFace ecosystem (transformers, TRL, accelerate, and datasets) to train models leveraging a multi-node configuration. The hyperparameters of the continual pre-training are listed in Table 6. The hyperparameters of the alignment process are provided in Tables 7 and 8 for SFT and DPO, respectively.
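As a rough sketch of how the Table 6 settings map onto this stack; the argument names assume a recent transformers release and the output directory is a placeholder.

```python
from transformers import TrainingArguments

# Continual pre-training settings from Table 6 (CPT excluding PIT); AdamW is the default optimizer.
args = TrainingArguments(
    output_dir="cpt-expert",            # placeholder path
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=35,
    num_train_epochs=1,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,      # 16 x 2 x 16 GPUs = effective batch size of 512
    neftune_noise_alpha=5,
    bf16=True,                          # assumption: mixed precision on A100 GPUs
)
```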
358
+
359
+ Ding et al. (2024) highlighted the importance of longer context windows, minimal truncations, and restricted cross-document attention during pretraining. Many prior works in medical NLP rely on concatenate-and-truncate strategies, despite introducing significant truncation (Toma et al., 2023; Christophe et al., 2023, 2024a; Labrak et al., 2024). Instead, we implement Best-Fit Packing (Ding et al., 2024) for the PubMed expert and DataMix baseline, ensuring efficient token utilization. The other experts are trained with PIT one document at a time. For all expert models, we avoid padding and cross-document attention (Kundu et al., 2024)
360
+
361
+ Table 6: Hyperparameters for CPT excluding PIT.
362
+
363
+ <table><tr><td colspan="2">Hyperparameter</td></tr><tr><td>Maximum Tokens</td><td>4,096</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr><tr><td>LR Scheduler</td><td>Linear Warmup - Cosine</td></tr><tr><td>#Warmup Steps</td><td>35</td></tr><tr><td>Maximum LR</td><td>1 × 10-4</td></tr><tr><td>Epochs</td><td>1</td></tr><tr><td>NEFTune α</td><td>5</td></tr><tr><td>Batch Size per GPU</td><td>16</td></tr><tr><td># GPUs</td><td>16</td></tr><tr><td>Gradient Accumulation</td><td>2</td></tr><tr><td>Effective Batch Size</td><td>512</td></tr></table>
364
+
365
+ during training.
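A minimal sketch of the packing idea under a best-fit-decreasing heuristic over tokenized document lengths follows; it only illustrates the bin-assignment step, not the exact algorithm of Ding et al. (2024), and documents longer than the window are assumed to be split upstream.

```python
def best_fit_packing(doc_lengths: list[int], max_tokens: int = 4096) -> list[list[int]]:
    """Assign document indices to training sequences of at most max_tokens each,
    placing each document into the fullest sequence that still has room (best fit)."""
    order = sorted(range(len(doc_lengths)), key=lambda i: doc_lengths[i], reverse=True)
    bins: list[list[int]] = []
    space_left: list[int] = []
    for i in order:
        length = min(doc_lengths[i], max_tokens)
        fitting = [(space, b) for b, space in enumerate(space_left) if space >= length]
        if fitting:
            _, b = min(fitting)            # tightest remaining fit
            bins[b].append(i)
            space_left[b] -= length
        else:
            bins.append([i])
            space_left.append(max_tokens - length)
    return bins
```

Each resulting group is then concatenated into one training sequence, with attention masked so that tokens do not attend across document boundaries.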
366
+
367
+ # A.3 Validation Sets
368
+
369
+ The twelve validation sets cover subjects such as clinical cases, clinical knowledge, medication, ICD10CM code definitions, radiology reports, clinical NLI, QA on discharge letters, medical codes from discharge letters, problem lists from clinical notes, summarization of patient inquiries, QA on medical consultations and QA on multiple EHR documents. We control their diversity by generating embeddings of each question with its context and applying the density-based clustering method HDBSCAN (McInnes et al., 2017). The final validation sets combine the outliers (i.e. samples not assigned to a cluster) with one sample per cluster, for a total of up to 1,200 samples each.
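A sketch of this selection step, assuming `embeddings` is a NumPy array of question-plus-context embeddings and using the hdbscan package; the `min_cluster_size` value is an illustrative choice, not reported in the paper.

```python
import numpy as np
import hdbscan

def select_validation_subset(embeddings: np.ndarray, max_samples: int = 1200) -> list[int]:
    """Keep every outlier (label -1) plus one representative sample per HDBSCAN cluster."""
    labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(embeddings)
    selected = [i for i, label in enumerate(labels) if label == -1]   # outliers
    for cluster_id in sorted(set(labels) - {-1}):
        members = np.flatnonzero(labels == cluster_id)
        selected.append(int(members[0]))                              # one sample per cluster
    return selected[:max_samples]
```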
370
+
371
+ # A.4 Generation of MediFlow triplets
372
+
373
+ We parametrized the prompt to generate MediFlow based on the task type, input data, output format (plain text or JSON), difficulty level (moderate, moderate-hard, hard, very hard, or extreme), and number of input-output example pairs (3 or 4 per instruction). For the difficulty level, we favored sampling the hard, very hard and extreme levels over the others by a ratio of 3:1.
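One way to realize this sampling scheme is sketched below; the variable names are illustrative, the lists are truncated (the full task and input-data types follow in the next paragraphs), and the 3:1 ratio is implemented as sampling weights favoring the three hardest levels.

```python
import random

TASK_TYPES = ["summarization", "question-answering", "named entity recognition"]   # truncated; 14 in total
INPUT_DATA_TYPES = ["Discharge Summary", "Radiology Report", "Clinical Note"]      # truncated; 36 in total
DIFFICULTIES = ["moderate", "moderate-hard", "hard", "very hard", "extreme"]
DIFFICULTY_WEIGHTS = [1, 1, 3, 3, 3]   # favor hard / very hard / extreme by a 3:1 ratio

def sample_prompt_parameters() -> dict:
    return {
        "task_type": random.choice(TASK_TYPES),
        "input_data": random.choice(INPUT_DATA_TYPES),
        "output_format": random.choice(["plain text", "JSON"]),
        "difficulty": random.choices(DIFFICULTIES, weights=DIFFICULTY_WEIGHTS, k=1)[0],
        "number_examples": random.choice([3, 4]),
    }
```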
374
+
375
+ The 14 task types are: summarization, question-answering, multiple-choice question-answering, named entity recognition, relation extraction, classification, reasoning & diagnosis, textual entailment, text simplification, text expansion, abbreviation expansion, aspect-oriented keyword extraction, error detection & correction, and note scoring.
376
+
377
+ The 36 input-data types come with various levels of granularity, from 1 (only the complete document) up to 7 (the complete document with 6 individual sections),
378
+
379
+ Table 7: Hyperparameters for SFT.
380
+
381
+ <table><tr><td colspan="2">Hyperparameter</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr><tr><td>LR Scheduler</td><td>Linear Warmup - Cosine</td></tr><tr><td>#Warmup Steps</td><td>40</td></tr><tr><td>Maximum LR</td><td>2 × 10-5</td></tr><tr><td>Epochs</td><td>2</td></tr><tr><td>NEFTune α</td><td>5</td></tr><tr><td>Batch Size per GPU</td><td>16</td></tr><tr><td># GPUs</td><td>8</td></tr><tr><td>Gradient Accumulation</td><td>2</td></tr><tr><td>Effective Batch Size</td><td>256</td></tr></table>
382
+
383
+ Table 8: Hyperparameters for DPO.
384
+
385
+ <table><tr><td colspan="2">Hyperparameter</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr><tr><td>LR Scheduler</td><td>Linear Warmup - Cosine</td></tr><tr><td>#Warmup Steps</td><td>50</td></tr><tr><td>Maximum LR</td><td>1 × 10-6</td></tr><tr><td>Epoch</td><td>1</td></tr><tr><td>Batch Size per GPU</td><td>8</td></tr><tr><td># GPUs</td><td>16</td></tr><tr><td>Gradient Accumulation</td><td>1</td></tr><tr><td>Effective Batch Size</td><td>128</td></tr><tr><td>β</td><td>0.1</td></tr></table>
386
+
387
+ resulting in a total of 98 fine-grained input-data types. Decisions on the granularity levels were made based on document length (e.g. taking short documents whole versus segmenting long documents). The data types are:
388
+
389
+ 1. Discharge Summary (6)
390
+ 2. SOAP Clinical Note (5)
391
+ 3. Clinical Note (6)
392
+ 4. Progress Note (4)
393
+ 5. Admission Note (6)
394
+ 6. Scientific Article (8)
395
+ 7. Clinical Case (3)
396
+ 8. Nursing Note (1)
397
+ 9. Monitoring Data of Vital Signs (1)
398
+ 10. Referral Letter (1)
399
+ 11. Emergency Department Note (7)
400
+ 12. Laboratory Report (1)
401
+ 13. Radiology Report (3)
402
+ 14. Doctor-Patient Conversation (5)
403
+ 15. Nurse-Patient Dialog (5)
404
+ 16. Operative Note (8)
405
+ 17. Consultation Note (1)
406
+ 18. Pathology Report (1)
407
+ 19. Prescription Note (1)
408
+ 20. Preoperative Assessment (1)
409
+ 21. Postoperative Note (1)
410
+ 22. Therapy Notes (5)
411
+ 23. Immunization Record (1)
412
+ 24. Screening Report (1)
413
+ 25. Consent Form (1)
414
+ 26. Care Plan (1)
415
+ 27. Dietary Notes (1)
416
+ 28. Psychiatric Evaluation (5)
417
+ 29. Social Work Note (1)
418
+ 30. End-of-Life Care Documentation (1)
419
+ 31. Triage Note (1)
420
+ 32. Dental Record (1)
421
+
422
+ 33. Home Health Care Report (1)
423
+ 34. Genetic Testing Report (1)
424
+ 35. Incident Report (1)
425
+ 36. Patient Education Material (1)
426
+
427
+ In Figures 7, 8 and 9, we present the histograms of tokens for generated instructions, inputs and outputs, respectively.
428
+
429
+ ![](images/2b0448bb2d1bcd72ce42eae3c7f0411393704f40814924b28575d33940e7d384.jpg)
430
+ Figure 7: Distribution of instruction tokens in MediFlow with y-axis in log scale. The average is $301 \pm 295$ tokens.
431
+
432
+ ![](images/3fefaf5c234894d4dbc21db259e083e50ebb43c7bc36898cd3340031cce95e48.jpg)
433
+ Figure 8: Distribution of input tokens in MediFlow with y-axis in log scale. The average is $76 \pm 67$ tokens.
434
+
435
+ ![](images/6f0dde99dba2d7c201739068ccb2e2855462bc24e8e63d75be089e9e95609b90.jpg)
436
+ Figure 9: Distribution of output tokens in MediFlow with y-axis in log scale. The average is $79 \pm 66$ tokens.
437
+
438
+ # MediFlow Prompt
439
+
440
+ You are an expert user querying about the medical and clinical domain in natural language processing. You will define instructions for a precise task with clear constraints in the medical/clinical domain. You must be very detailed in the instructions regarding input data, the {{task_type}} task and the desired output format which is {{output_format}}. You must use {{input_data}} as the task's input in some ways, especially {{input_datagranular}}. You must put these between the tags <instruction>...</instruction>. You must define clearly based on these parameters a specific task from the given type, specific expected input data and output format. You must make a task with a {{difficulty}} difficulty on a scale of 6 levels (low, moderate, moderate-hard, hard, very hard and extreme). You must not mention the task level in your own instructions. You must only write the instructions, i.e. do not use markdown, no extra comment, etc.
441
+
442
+ Then, you must give {{number/examples}} examples between tags <examples> <example> <input>...</input> <output>...</output> </example> containing input and output. You must give a complete example with input and output at the end.
443
+
444
+ You must use interesting and complex examples requiring abstractive medical capabilities to infer the output from the input. So, you must avoid any obvious input and output, and you must favor very difficult pairs. You must strictly use the format <instruction>...</instruction> followed by <examples> <example> <input>...</input> <output>...</output> </example> </examples> with exactly those tag names.
445
+
446
+ You must use synonyms for all headers (or no header at all) to avoid leaking current vocabulary into the instructions, also use different ways to structure (or not) and detail the instructions (e.g. bullet points, sections, narrative form, or else).
447
+
448
+ # A.5 Generation of MediFlow Judge Scores
449
+
450
+ We used GPT-4o-mini to score all MediFlow instructions on 5 criteria: quality, alignment with instruction requirements, coherence, realism and difficulty. We applied a temperature of 1.0 with chain-of-thought for each criterion and averaged scores across 5 generated samples for self-consistency. All scores are integers on a scale from 1 to 4 as defined below.
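A sketch of the self-consistency scoring loop, assuming the openai Python client with the public API model naming (the paper uses Azure OpenAI), that the judge prompt below has already been filled in, and that the judge returns the bare JSON object it is instructed to produce.

```python
import json
from openai import OpenAI

client = OpenAI()
CRITERIA = ["quality", "alignment with instruction requirements", "coherence", "realism", "difficulty"]

def judge_scores(filled_judge_prompt: str, n_samples: int = 5) -> dict:
    """Average the integer scores (1-4) per criterion over n_samples generations at temperature 1.0."""
    totals = {criterion: 0.0 for criterion in CRITERIA}
    for _ in range(n_samples):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=1.0,
            messages=[{"role": "user", "content": filled_judge_prompt}],
        )
        parsed = json.loads(response.choices[0].message.content)
        for criterion in CRITERIA:
            totals[criterion] += parsed[criterion]["score"]
    return {criterion: total / n_samples for criterion, total in totals.items()}
```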
451
+
452
+ # LLM-as-a-Judge MediFlow Prompt
453
+
454
+ You are the best instruction designer for language models in the medical/clinical field. You will be given instructions with constraints for a task to perform on clinical documents. Your task is to give a critical assessment of the instructions as a nested JSON object. For each criteria as a key, the value is a JSON object containing a "rationale" along a "score" on a scale of 1 to 4. The criteria are: quality, alignment with instruction requirements, coherence, realism and difficulty.
455
+
456
+ INSTRUCTION REQUIREMENTS: Here are the instruction requirements:
457
+
458
+ - Defined very detailed instructions for a precise task with clear constraints in the medical/clinical domain.
459
+
460
+ - Clearly defined task type, specific input data and output format. It can also contain examples.
461
+
462
+ - Only write the instructions, i.e. do not use markdown, no extra comment, etc.
463
+
464
+ SCALES::
465
+
466
+ {{criteriadefinitions}}
467
+
468
+ INSTRUCTIONS::
469
+
470
+ {{instruction}}
471
+
472
+ You must only output the JSON object for the critical
473
+
474
+ assessment. Remember valid scores are: 1, 2, 3 and 4.
475
+
476
+ OUTPUT::
477
+
478
+ # A.5.1 Judge Score Distributions
479
+
480
+ In Figure 10, we display the histograms (with a logarithmic y-axis) of the judge scores predicted on nearly 700k unique instructions. For all criteria, we note that the distribution of generated instructions peaks at 3, with a large mass between 3 and 4 (on a scale from 1 to 4), which is considered on the high end. We observe a different trend for the difficulty criterion, where the largest peak is still at 3 (i.e. a difficulty level of hard) but some portion of the dataset (i.e. fewer than 100k instructions) has scores between 2 and 3.
481
+
482
+ ![](images/db8f5ebf9ccbcc9bd5cd570ec87ace73d983d23f9201dc781effe5da50278056.jpg)
483
+
484
+ ![](images/1a5148119c4aeb435da18c118b32575d2b94ebef5bfaff07f6ef3ed087745f47.jpg)
485
+
486
+ ![](images/a2148da20f767e7cdb81f023725c6b6f8eecd5d5fbbe14277a5b059b11eea0e9.jpg)
487
+
488
+ ![](images/b2a629072ee14171a9b240067ab7c74ca7eab880d50373a0a9e7109f5610a22b.jpg)
489
+
490
+ ![](images/ee980633431577d8ad07ae93f2a80e09e3e2d656b943b47f215b96649f86a0af.jpg)
491
+ Figure 10: Judge Score Distributions across MediFlow. Y-axis is set to log scale.
492
+
493
+ # A.6 MediFlow Scatterplots
494
+
495
+ We generated embeddings with OpenAI text-embedding-3-large (truncated at 256 dimensions) for each instruction in MediFlow. Then, we applied PCA down to 50 dimensions followed by t-SNE to obtain 2 dimensions, which we display on scatterplots with
496
+
497
+ other labels (output formats, difficulty levels, task type and input-data type) in Figures 13, 11 and 12. While the scatterplot on the difficulty levels does not exhibit clear clustering patterns, we see that the others have distinctive patterns at different scales. The output format affects local clustering patterns, often appearing at small scale as two closed blobs. The task and input-data types affect the macrostructure at a similarly large scale.
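A sketch of this projection pipeline with scikit-learn, assuming `embeddings` already holds the 256-dimensional instruction embeddings; the random seed is an arbitrary choice for reproducibility.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def project_to_2d(embeddings: np.ndarray) -> np.ndarray:
    """256-d instruction embeddings -> 50-d PCA -> 2-d t-SNE coordinates for the scatterplots."""
    reduced = PCA(n_components=50).fit_transform(embeddings)
    return TSNE(n_components=2, init="pca", random_state=0).fit_transform(reduced)
```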
498
+
499
+ # A.7 Generation of Marginally Wrong Outputs for DPO
500
+
501
+ DPO requires a prompt (corresponding in our framework to an instruction with its input) along with a chosen response (i.e. output) and a rejected response. The rejected response must be less preferable than the chosen one. First, we filter MediFlow to around 85k instructions, keeping only the top triplets on the quality metric with stratification by task type, input-data type and output format. To get the best trade-off between diversity and high quality, we sampled 3 input-output pairs for the top high-quality 20k instructions, followed by 2 pairs for the next 25k and one pair for the last 40k. After filtering, this resulted in the MediPhi-DPO dataset of 130,852 triplets. Then, we prompted GPT-4o-0806 with each triplet at a temperature of 1.0 and a randomly sampled error type, using the following prompt to generate a marginally wrong output as the rejected output. The error types are: ambiguity, partial correctness, over-verbosity, brevity, unbalanced detail, stylistic issues, factual inaccuracy, logical flaws, misinterpretation, simplistic reasoning, grammatical errors, and spelling errors.
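The tiered pair sampling can be sketched as below; instructions are assumed to be pre-sorted by descending judge quality and already filtered to roughly 85k, and `generate_rejected` stands in for the GPT-4o-0806 call made with the prompt that follows.

```python
import random

ERROR_TYPES = [
    "ambiguity", "partial correctness", "over-verbosity", "brevity", "unbalanced detail",
    "stylistic issues", "factual inaccuracy", "logical flaws", "misinterpretation",
    "simplistic reasoning", "grammatical errors", "spelling errors",
]

def pairs_per_instruction(rank: int) -> int:
    """3 pairs for the top 20k instructions, 2 for the next 25k, 1 for the remaining ones."""
    if rank < 20_000:
        return 3
    if rank < 45_000:
        return 2
    return 1

def build_dpo_triplets(sorted_instructions: list[dict], generate_rejected) -> list[dict]:
    """generate_rejected(instruction, input_text, output_text, error_type) -> marginally wrong output."""
    triplets = []
    for rank, item in enumerate(sorted_instructions):
        for pair in item["pairs"][: pairs_per_instruction(rank)]:
            rejected = generate_rejected(item["instruction"], pair["input"], pair["output"],
                                         random.choice(ERROR_TYPES))
            triplets.append({
                "prompt": item["instruction"] + "\n\n" + pair["input"],
                "chosen": pair["output"],
                "rejected": rejected,
            })
    return triplets
```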
502
+
503
+ # Marginally Worse Output Prompt
504
+
505
+ You are a subtle flaw introducer, trained to degrade a high-quality response by introducing a specific type of error without making it obviously incorrect. You will receive the triplet (instructions, input, output). The instructions are what to do to the input to get the output, which is the good response. Your task is to provide a wrong output but in a subtle way. Here's the definitions of error types:
506
+
507
+ {{error_typedefinitions}}
508
+
509
+ INSTRUCTIONS {{instruction}}
510
+
511
+ INPUT {{input}}
512
+
513
+ OUTPUT {{output}}
514
+
515
+ Generate a wrong response that is marginally worse than the good output. Introduce an error of type $\{\{\mathrm{error\_type}\}\}$ , ensuring the response still seems reasonable at first glance.
516
+
517
+ ![](images/c27c16bee6ed199af84c518dba30389c0ce01088f27f03fdcc91cbe37013f5ef.jpg)
518
+ Figure 11: t-SNE 2D scatterplot of MediFlow (2.5M) using OpenAI text-embedding-3 large API with tasks as colors.
519
+
520
+ ![](images/98afa19dcb01724a80532f1a8942da018d1424e1b1bdc762992ff73eb925c3cd.jpg)
521
+ Figure 12: t-SNE 2D scatterplot of MediFlow (2.5M) using OpenAI text-embedding-3 large API with input-data types as colors.
522
+
523
+ ![](images/e9dc3a086968051b7d6aaff0b6b9673812c793811e2b8b3ce8a26a0125870f0c.jpg)
524
+ Figure 13: t-SNE 2D scatterplots of MediFlow (2.5M) with output format (top) and difficulty level (bottom) as colors.
525
+
526
+ # A.8 CLUE+ Benchmark
527
+
528
+ # A.8.1 MedicationQA
529
+
530
+ MedicationQA (Ben Abacha et al., 2019) consists of 674 consumer health questions collected from MedlinePlus<sup>8</sup>. These questions were linked to a matching excerpt from a trusted source that contains the answer.
531
+
532
+ Through manual inspection we identified three obstacles to effective few-shot evaluation of LLMs on MedicationQA:
533
+
534
+ - Some questions were poorly formulated or were a search query rather than a question
535
+ - Some of the answers did not give a specific answer to the question
536
+ - Answers were often unconcise since they were not formulated as a direct answer to the question, but as a retrieved excerpt from a text
537
+
538
+ Based on these observations we prompted an $\mathrm{LLM}^9$ to remove non-matching question answer pairs and formulate a direct answer based on the given excerpt. Figure 14 shows an example for a reformulated answer. This process resulted in 485 question-answer pairs.
539
+
540
+ For benchmarking we use the following system prompt:
541
+
542
+ # MedicationQA System Prompt
543
+
544
+ You are a highly skilled assistant, specifically trained to assist patients with medical questions. Give a concise answer. Do not mention anything that was not explicitly asked for. Do not generate anything else.
545
+
546
+ # A.8.2 MEDIQA-RRS QA
547
+
548
+ MEDIQA-RRS (Ben Abacha et al., 2021) is a summarization dataset based on the findings and impressions sections of radiology reports from the Indiana University chest X-ray dataset (Demner-Fushman et al., 2016) and reports from the Stanford Health Care system. The findings serve as model inputs while the impressions are treated as the summarization ground truth. We evaluate on the test split, which contains 600 finding-impression pairs. We observed that the impressions were often not a complete summary, but rather an answer to a question posed before the exam (e.g., "No acute cardiopulmonary findings"). Additionally, the comprehensiveness of impressions varied substantially between reports. To address this, we reformulated the impressions into a series of question-answer pairs using an LLM<sup>9</sup>. We specified that the answers should use the same wording as in the original impression section to ensure factuality. We confirmed that answers were not changed by filtering answers with exact string matching. Figure 15 shows an example of this reformulation step.
549
+
550
+ The following system prompt was used in the evaluation:
551
+
552
+ # MEDIQA-RRS QA System Prompt
553
+
554
+ You are a highly skilled assistant, specifically trained to interpret radiology reports. You will receive the findings section of a report along with specific questions. Provide concise, focused answers based solely on the information provided. Avoid adding any details not explicitly requested.
555
+
556
+ # A.8.3 ACI-Bench
557
+
558
+ ACI-Bench (Yim et al., 2023) is a dataset that takes a doctor-patient dialog as input; the task is to generate a clinical note with five sections.
559
+
560
+ # ACI-Bench System Prompt
561
+
562
+ Task: Generate an Extremely Detailed Clinical Note from a Doctor-Patient Conversation
563
+
564
+ Role: You are an expert medical professional responsible for creating a highly detailed, comprehensive, and fully elaborated clinical note from a doctor-patient conversation.
565
+
566
+ Instructions: - You will receive a doctor-patient conversation as input. - Your task is to produce a long, exhaustive clinical note covering all clinically relevant details. - The note must be extremely detailed, ensuring no important information is omitted. - Expand on every symptom, examination finding, and treatment plan to provide a complete, structured summary.
567
+
568
+ # Question
569
+
570
+ is it nessesary to ween off of cymbalta before starting effexor?
571
+
572
+ # Original Answer
573
+
574
+ Switching from one antidepressant to another is frequently indicated due to an inadequate treatment response or unacceptable adverse effects. All antidepressant switches must be carried out cautiously and under close observation.
575
+
576
+ Conservative switching strategies involve gradually tapering the first antidepressant followed by an adequate washout period before the new antidepressant is started. This can take a long time and include periods of no treatment with the risk of potentially life-threatening exacerbations of illness.
577
+
578
+ Clinical expertise is needed for more rapid or cross-taper switching as drug toxicity, including serotonin syndrome, may result from inappropriate co-administration of antidepressants. Some antidepressants must not be combined.
579
+
580
+ Antidepressants can cause withdrawal syndromes if discontinued abruptly after prolonged use. Relapse and exacerbation of depression can also occur.
581
+
582
+ Gradual dose reduction over days to weeks reduces the risk and severity of complications.
583
+
584
+ # Reformulated Answer
585
+
586
+ Yes, it is necessary to taper off Cymbalta before starting Effexor to avoid potential complications, including withdrawal syndromes and serotonin syndrome. A gradual dose reduction over days to weeks is recommended to reduce the risk and severity of complications.
587
+
588
+ Figure 14: An example of a verbose answer in MedicationQA and the reformulated answer we replaced it with.
589
+
590
+ # Findings
591
+
592
+ Fracture deformity proximal right humerus. Hyperinflation lungs. No pulmonary consolidation. opacity left base compatible atelectasis or scarring. The cardiomediastinal silhouette appears unremarkable. Mild atherosclerotic
593
+
594
+ calcification aorta. Prior chest surgery. Costophrenic clear. Visualized spine vertebrae appear normal in and alignment.
595
+
596
+ # Original Sample
597
+
598
+ # Impression
599
+
600
+ Fracture deformity proximal right humerus. No pulmonary consolidation
601
+
602
+ # Reformulated QA
603
+
604
+ # Question
605
+
606
+ What is noted in the proximal right humerus?
607
+
608
+ # Answer
609
+
610
+ Fracture deformity.
611
+
612
+ # Question
613
+
614
+ Is there any pulmonary consolidation?
615
+
616
+ # Answer
617
+
618
+ No.
619
+
620
+ Figure 15: An example of a formulation of questions based on the impression section.
621
+
622
+ Output Format: Your clinical note must be structured into the following five required sections:
623
+ 1. CHIEF COMPLAINT A precise and natural-language statement describing the primary reason for the visit. Usually written in the patient's own words (e.g., "Chest pain for two days.")
624
+ 2. HISTORY OF PRESENT ILLNESS Provide a detailed, full narrative including: - Onset: Exact time course (sudden/gradual, exact duration). - Duration: Progression over time. - Severity: Patient's description or numeric scale (1-10). - Location: Anatomical specificity. - Modifying Factors: What worsens or improves symptoms (activities, medications). - Associated Symptoms: Describe all related symptoms. - Prior Treatments: List exact medications, dosages, patient responses.
625
+ 3. PHYSICAL EXAM Expand on every finding instead of using short labels. Describe specific observations for each system: - Vital Signs: BP, HR, Temp, O2 Sat, RR. - General Appearance: Patient's demeanor, level of distress. - Neurological: Reflexes, motor strength, sensory findings. - Cardiovascular: Detailed heart sounds, pulses, peripheral findings. - Pulmonary: Breath sounds, presence of wheezes/crackles. - Abdominal: Bowel sounds, tenderness, distension.
626
+ 4. RESULTS Include all relevant diagnostic data, explaining why each result matters. List both abnormal and pertinent normal findings.
627
+ 5. ASSESSMENT AND PLAN (A/P) Diagnosis & Differential Diagnosis: Explain why the most likely condition was chosen. Plan:
628
+ - Medications: Name, dose, frequency, and rationale. - Additional Tests: Imaging, lab work, specialist referrals. - Follow-up Plan: Next steps, expected outcomes. - Patient Education: Instructions, lifestyle modifications. - Ensure that every plan component is justified.
629
+ Reminder:
630
+ - You must include every detail exhaustively from the dialog.
631
+ - You must justify every diagnosis and treatment.
632
+ - You must provide a thorough explanation of the assessment and plan.
633
+ Thus, you must expand on every detail from the dialog in the note.
634
+
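The five required sections above make it straightforward to sanity-check a generated note before scoring it. Below is a minimal sketch in Python; the helper name, the toy note, and the check itself are illustrative assumptions, not part of the benchmark pipeline.

```python
# Minimal sketch: verify that a generated clinical note contains the five
# required sections named in the prompt above. The toy note is invented
# for illustration; real notes come from the model under evaluation.
REQUIRED_SECTIONS = [
    "CHIEF COMPLAINT",
    "HISTORY OF PRESENT ILLNESS",
    "PHYSICAL EXAM",
    "RESULTS",
    "ASSESSMENT AND PLAN",
]

def missing_sections(note: str) -> list[str]:
    """Return the required section headers that do not appear in the note."""
    upper = note.upper()
    return [s for s in REQUIRED_SECTIONS if s not in upper]

if __name__ == "__main__":
    toy_note = "CHIEF COMPLAINT\nChest pain for two days.\nHISTORY OF PRESENT ILLNESS\n..."
    print(missing_sections(toy_note))  # ['PHYSICAL EXAM', 'RESULTS', 'ASSESSMENT AND PLAN']
```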
635
+ # A.8.4 Social Determinants of Health
636
+
637
+ # SDoH System Prompt
638
+
639
+ Task: Entity Extraction for Social Determinants of Health (SDOH) Your job is to extract key socio-demographic and behavioral factors from the provided text. Your output must be a list of JSON objects, where each object contains:
640
+ - "entity": The specific category of the extracted information, chosen from the predefined taxonomy below. - "label": The corresponding label from the taxonomy. - "value": The exact phrase from the document that represents the entity.
641
+ Entities & Taxonomy:
642
+ 1. Employment
643
+ - "entity": "Employment" (general employment-related mention)
644
+ - "label": "StatusEmploy" $\rightarrow$ "employed", "unemployed", "retired",
645
+ "on disability", "student", "homemaker"
646
+ - "label": "Duration" $\rightarrow$ "for the last five years", "since 2010"
647
+ - "label": "History" $\rightarrow$ "15 years ago", "in 2005"
648
+ - "label": "Type" $\rightarrow$ Specific occupations (e.g., "geologist", "registered nurse", "office work")
649
+ 2. Living Status
650
+ - "entity": "LivingStatus" (mentions of where and how someone lives)
651
+ - "label": "StatusTime" $\rightarrow$ "current", "past", "future"
652
+ - "label": "TypeLiving" $\rightarrow$ "alone", "with family", "with others", "homeless"
653
+ - "label": "Duration" $\rightarrow$ "for the past ten years", "since 2015"
654
+ - "label": "History" → "moved out five years ago", "in 2010"
655
+ 3. Substance Use
656
+ - "entity": "Alcohol", "Drug", "Tobacco" (mentions of substance use)
657
+ - "label": "StatusTime" $\rightarrow$ "none", "current", "past"
658
+ - "label": "Duration" $\rightarrow$ "for the past eight years"
659
+ - "label": "History" $\rightarrow$ "seven years ago", "in 2005"
660
+ "label": "Method" $\rightarrow$ "smoke", "snort", "inhale", "inject" (for drugs), "chew", "vape" (for tobacco)
661
+ "label": "Type" $\rightarrow$ "beer", "wine", "heroin", "marijuana", "cigarettes"
662
+ - "label": "Amount" $\rightarrow$ "#of drinks", "#of cigarettes", "#of times"
663
+ - "label": "Frequency" $\rightarrow$ "daily", "monthly", "yearly"
664
+ Example Output (Using Entities from the Document): [{"entity":
665
+ "Employment", "label": "StatusEmploy", "value": "full-time student"}, {"entity": "LivingStatus", "label": "TypeLiving", "value":
666
+ "currently lives alone"}, {"entity": "Alcohol", "label": "StatusTime", "value": "drinks occasionally"}, {"entity": "Drug", "label": "History", "value": "used marijuana seven years ago"}]
667
+ Guidelines for Extraction:
668
+
669
+ 1. Extract only explicitly mentioned entities—do not infer information.
670
+ 2. Use exact text from the document—the "value" must match the original wording.
671
+ 3. Categorize precisely—select the most appropriate "entity" and "label".
672
+ 4. Ensure valid JSON format—return structured, machine-readable output.
673
+ Your final output should be a list of JSON objects containing only the entities present in the document.
674
+
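Since the prompt requests a flat list of JSON objects with fixed `entity`, `label`, and `value` keys, model responses can be parsed and lightly validated against the taxonomy before scoring. The sketch below is a minimal Python illustration, assuming the taxonomy transcribed above; the example response and note snippet are invented.

```python
import json

# Allowed entity/label pairs, transcribed from the taxonomy in the prompt above.
TAXONOMY = {
    "Employment": {"StatusEmploy", "Duration", "History", "Type"},
    "LivingStatus": {"StatusTime", "TypeLiving", "Duration", "History"},
    "Alcohol": {"StatusTime", "Duration", "History", "Method", "Type", "Amount", "Frequency"},
    "Drug": {"StatusTime", "Duration", "History", "Method", "Type", "Amount", "Frequency"},
    "Tobacco": {"StatusTime", "Duration", "History", "Method", "Type", "Amount", "Frequency"},
}

def parse_sdoh(raw_output: str, source_text: str) -> list[dict]:
    """Parse the model's JSON list and keep only well-formed, verbatim entities."""
    try:
        items = json.loads(raw_output)
    except json.JSONDecodeError:
        return []
    kept = []
    for item in items:
        if not isinstance(item, dict):
            continue
        entity, label, value = item.get("entity"), item.get("label"), item.get("value")
        # The guidelines require exact spans from the document and valid taxonomy pairs.
        if label in TAXONOMY.get(entity, set()) and isinstance(value, str) and value in source_text:
            kept.append({"entity": entity, "label": label, "value": value})
    return kept

# Example usage with an invented model response and note snippet.
raw_output = '[{"entity": "Tobacco", "label": "StatusTime", "value": "quit smoking"}]'
print(parse_sdoh(raw_output, "She quit smoking ten years ago."))
```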
675
+ # A.8.5 MEDEC
676
+
677
+ # MEDEC System Prompt
678
+
679
+ Task: Medical Error Detection in Clinical Text Role: You are an expert medical reviewer analyzing clinical text for accuracy. Your task is to determine whether there is one medical error in the provided text.
680
+ Input Format: The input consists of multiple sentences. Each sentence starts with a Sentence ID, followed by the sentence itself. Sentences are formatted one per line with a space separating the ID and the sentence text.
681
+ Types of Errors to Detect: Diagnosis errors (incorrect or conflicting diagnoses), Treatment errors (inappropriate or missing treatments), Management errors (incorrect clinical decision-making), and Causation errors (incorrect understanding of disease causes or progression).
682
+ Output Format: If one sentence contains a medical error, return only the Sentence ID of that sentence. If no errors are found, return "-1" (without quotes). You must not provide any explanation, only the Sentence ID or -1.
683
+ Example with an error: input:
684
+ 0 The patient was diagnosed with bacterial pneumonia and prescribed amoxicillin.
685
+ 1 The recommended treatment for viral pneumonia is antibiotics.
686
+ 2 The patient showed signs of improvement after three days. output: 1
687
+ Example without error:
688
+ input: 0 The patient was diagnosed with bacterial pneumonia and prescribed amoxicillin.
689
+ 1 The recommended treatment for bacterial pneumonia is antibiotics.
690
+ 2 The patient showed signs of improvement after three days. output: -1
691
+
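The input/output contract above is simple to wire around any inference backend: number the sentences, read back a single sentence ID or -1, and score by exact match, as in the sentence ID accuracy reported in Table 9. The sketch below is a minimal Python illustration; the model reply is a placeholder rather than an actual call.

```python
def format_medec_input(sentences: list[str]) -> str:
    """Format sentences one per line, prefixed with their sentence ID."""
    return "\n".join(f"{i} {s}" for i, s in enumerate(sentences))

def parse_medec_output(raw: str) -> int:
    """Read back a sentence ID, or -1 when no error is flagged or parsing fails."""
    try:
        return int(raw.strip().split()[0])
    except (ValueError, IndexError):
        return -1

def sentence_id_accuracy(predictions: list[int], references: list[int]) -> float:
    """Exact-match accuracy over predicted sentence IDs (including -1 for 'no error')."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references) if references else 0.0

# Example usage with the prompt's own example and an invented model reply.
sents = [
    "The patient was diagnosed with bacterial pneumonia and prescribed amoxicillin.",
    "The recommended treatment for viral pneumonia is antibiotics.",
    "The patient showed signs of improvement after three days.",
]
prompt_input = format_medec_input(sents)
model_reply = "1"  # placeholder for the actual model call
print(parse_medec_output(model_reply), sentence_id_accuracy([1], [1]))
```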
692
+ Table 9: The configuration used for the CLUE+ subset of benchmark datasets, giving the number of few-shot examples and the evaluation metric. The decoding strategy is greedy decoding for all.
693
+
694
+ <table><tr><td>Dataset Name</td><td>Few-Shots</td><td>Metric</td></tr><tr><td>ICD10CM</td><td>3</td><td>AVG Accuracy</td></tr><tr><td>MedicationQA</td><td>3</td><td>Rouge-1 F1 (Lin, 2004)</td></tr><tr><td>RRS QA</td><td>3</td><td>Rouge-1 F1 (Lin, 2004)</td></tr><tr><td>SDoH</td><td>4</td><td>Type-Match F1 w/ boundary overlap (Chai, 2019)</td></tr><tr><td>ACI-Bench</td><td>1</td><td>Rouge-1 F1 (Lin, 2004)</td></tr><tr><td>MEDEC</td><td>2</td><td>Sentence ID Accuracy</td></tr></table>
695
+
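For the datasets scored with Rouge-1 F1 in Table 9, the metric is a unigram-overlap F1 between the generated answer and the reference. The sketch below uses the `rouge-score` package as one common implementation; the document does not state here which implementation or settings were used, and the prediction string is invented.

```python
# Minimal sketch of Rouge-1 F1 scoring (Lin, 2004), as used for MedicationQA,
# RRS QA, and ACI-Bench in Table 9. Requires: pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"])

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a model answer and the reference answer."""
    return scorer.score(reference, prediction)["rouge1"].fmeasure

# Example with the reformulated MedicationQA answer from Figure 14 as the reference
# and an invented model prediction.
ref = "Yes, it is necessary to taper off Cymbalta before starting Effexor."
pred = "Yes, Cymbalta should be tapered before starting Effexor."
print(round(rouge1_f1(pred, ref), 3))
```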
696
+ Table 10: CLUE+ benchmark datasets with task types, input, and output specifications. Task types are: question-answering (QA), summarization, reasoning and information extraction (IE).
697
+
698
+ <table><tr><td></td><td>Task</td><td>Dataset Name</td><td>Input</td><td>Output</td></tr><tr><td rowspan="6">CLUE</td><td>NLI</td><td>MedNLI (Romanov and Shivade, 2018)</td><td>Premise + Hypothesis</td><td>Label</td></tr><tr><td>Summary</td><td>MeQSum (Ben Abacha and Demner-Fushman, 2019)</td><td>Consumer Health Question</td><td>Summary question</td></tr><tr><td>Summary</td><td>Problem list summarization (Gao et al., 2023)</td><td>Progress notes</td><td>Problem list</td></tr><tr><td>QA</td><td>LongHealth (Adams et al., 2024)</td><td>Clinical records + Question</td><td>Answer</td></tr><tr><td>QA</td><td>MeDiSumQA (Dada et al., 2025)</td><td>Discharge letter + Questions</td><td>Answers</td></tr><tr><td>Reasoning</td><td>MeDiSumCode (Dada et al., 2024)</td><td>Discharge letter</td><td>ICD10CM codes</td></tr><tr><td rowspan="6">CLUE+</td><td>QA</td><td>MedConceptsQA ICD10CM (Shoham and Rappoport, 2024)</td><td>Code + definition options</td><td>Answer</td></tr><tr><td>QA</td><td>MedicationQA (Ben Abacha et al., 2019)</td><td>Question on medication</td><td>Answer</td></tr><tr><td>QA</td><td>MEDIQA-RRS QA (Ben Abacha et al., 2021)</td><td>Findings + Questions</td><td>Answers</td></tr><tr><td>IE</td><td>SDoH (Lybarger et al., 2023)</td><td>Clinical note</td><td>Entities</td></tr><tr><td>Summary</td><td>ACI-Bench (Yim et al., 2023)</td><td>Doctor-patient dialog</td><td>Clinical note</td></tr><tr><td>Reasoning</td><td>MEDEC (Ben Abacha et al., 2024)</td><td>Clinical note</td><td>Error detection</td></tr></table>
699
+
700
+ Table 11: Performances on the CLUE subset of datasets for the merged and aligned versions of MediPhi as well as other medical LLMs. PLS stands for Problem List Summary while LH refers to LongHealth.
701
+
702
+ <table><tr><td></td><td></td><td>MedNLI</td><td>PLS</td><td>MeQSum</td><td>LH</td><td>MeDiSumQA</td><td>MeDiSumCode</td></tr><tr><td>Baseline</td><td>Phi-3.5-mini-instruct</td><td>66.6</td><td>28.4</td><td>36.7</td><td>45.9</td><td>25.9</td><td>41.1</td></tr><tr><td rowspan="6">SLERP</td><td>DataMix</td><td>68.5</td><td>29.0</td><td>37.7</td><td>45.7</td><td>26.6</td><td>41.4</td></tr><tr><td>PubMed</td><td>68.3</td><td>29.2</td><td>37.6</td><td>45.7</td><td>26.3</td><td>41.0</td></tr><tr><td>Clinical</td><td>69.2</td><td>29.4</td><td>38.1</td><td>43.5</td><td>26.7</td><td>40.5</td></tr><tr><td>MedWiki</td><td>72.8</td><td>29.2</td><td>37.6</td><td>43.6</td><td>25.1</td><td>41.7</td></tr><tr><td>MedCode</td><td>68.5</td><td>22.3</td><td>33.5</td><td>45.7</td><td>23.6</td><td>39.0</td></tr><tr><td>Guideline</td><td>70.3</td><td>29.8</td><td>37.6</td><td>41.1</td><td>25.1</td><td>41.9</td></tr><tr><td>BreadCrumbs</td><td>MediPhi</td><td>66.9</td><td>28.8</td><td>37.9</td><td>45.7</td><td>26.1</td><td>41.7</td></tr><tr><td rowspan="2">MediFlow</td><td>MediPhi-SFT</td><td>70.6</td><td>26.9</td><td>42.8</td><td>44.2</td><td>28.8</td><td>35.0</td></tr><tr><td>MediPhi-Instruct</td><td>71.0</td><td>26.0</td><td>42.8</td><td>45.0</td><td>29.1</td><td>37.2</td></tr><tr><td rowspan="4">Other Medical LLMs</td><td>Mistral-7B-Instruct-v0.1</td><td>64.8</td><td>25.0</td><td>31.1</td><td>30.0</td><td>25.5</td><td>13.9</td></tr><tr><td>BioMistral-7B-DARE</td><td>66.8</td><td>28.4</td><td>34.5</td><td>30.5</td><td>25.7</td><td>21.3</td></tr><tr><td>Meta-Llama-3-8B-Instruct</td><td>74.1</td><td>31.6</td><td>39.5</td><td>58.8</td><td>30.3</td><td>27.8</td></tr><tr><td>Llama3-Med42-8B</td><td>77.5</td><td>32.4</td><td>42.8</td><td>57.9</td><td>29.7</td><td>25.2</td></tr></table>
703
+
704
+ Table 12: Performances on the new CLUE+ subset of datasets for the merged and aligned versions of MediPhi as well as other medical LLMs. ACI refers to ACI-Bench.
705
+
706
+ <table><tr><td></td><td></td><td>RRS QA</td><td>MedicationQA</td><td>MEDEC</td><td>ACI</td><td>SDoH</td><td>ICD10CM</td></tr><tr><td>Baseline</td><td>Phi-3.5-mini-instruct</td><td>41.2</td><td>11.2</td><td>14.8</td><td>42.3</td><td>35.1</td><td>49.3</td></tr><tr><td rowspan="6">SLERP</td><td>DataMix</td><td>43.3</td><td>10.8</td><td>18.8</td><td>42.7</td><td>36.2</td><td>49.5</td></tr><tr><td>PubMed</td><td>44.1</td><td>10.3</td><td>22.2</td><td>42.7</td><td>35.8</td><td>49.5</td></tr><tr><td>Clinical</td><td>52.1</td><td>12.0</td><td>34.5</td><td>43.9</td><td>35.8</td><td>49.6</td></tr><tr><td>MedWiki</td><td>46.7</td><td>12.2</td><td>28.8</td><td>44.7</td><td>43.6</td><td>50.2</td></tr><tr><td>MedCode</td><td>45.6</td><td>12.0</td><td>18.1</td><td>39.0</td><td>24.8</td><td>68.7</td></tr><tr><td>Guideline</td><td>48.9</td><td>11.9</td><td>28.3</td><td>44.7</td><td>41.0</td><td>49.8</td></tr><tr><td>BreadCrumbs</td><td>MediPhi</td><td>44.5</td><td>11.3</td><td>29.1</td><td>44.3</td><td>39.7</td><td>55.5</td></tr><tr><td rowspan="2">MediFlow</td><td>MediPhi-SFT</td><td>60.8</td><td>18.8</td><td>35.0</td><td>43.4</td><td>54.5</td><td>54.9</td></tr><tr><td>MediPhi-Instruct</td><td>61.6</td><td>19.3</td><td>34.4</td><td>43.5</td><td>56.7</td><td>54.9</td></tr><tr><td rowspan="4">Other Medical LLMs</td><td>Mistral-7B-Instruct-v0.1</td><td>50.4</td><td>22.7</td><td>21.5</td><td>50.4</td><td>40.2</td><td>27.6</td></tr><tr><td>BioMistral-7B-DARE</td><td>49.6</td><td>22.3</td><td>23.1</td><td>43.3</td><td>45.9</td><td>25.1</td></tr><tr><td>Meta-Llama-3-8B-Instruct</td><td>55.8</td><td>26.1</td><td>46.5</td><td>50.2</td><td>63.1</td><td>25.7</td></tr><tr><td>Llama3-Med42-8B</td><td>54.1</td><td>25.7</td><td>35.4</td><td>56.5</td><td>53.9</td><td>53.4</td></tr></table>
707
+
708
+ Table 13: Performances of MediPhi models on multiple-choice question-answering medical benchmarks.
709
+
710
+ <table><tr><td></td><td>MedQA</td><td>MedMCQA</td><td>PubMedQA</td><td>MMLU-med</td><td>AVG</td></tr><tr><td>Phi-3.5-mini-instruct</td><td>0.486</td><td>0.554</td><td>0.768</td><td>0.715</td><td>0.631</td></tr><tr><td>PubMed</td><td>0.467</td><td>0.549</td><td>0.774</td><td>0.724</td><td>0.629</td></tr><tr><td>Clinical</td><td>0.500</td><td>0.551</td><td>0.772</td><td>0.727</td><td>0.638</td></tr><tr><td>Medical</td><td>0.519</td><td>0.535</td><td>0.740</td><td>0.701</td><td>0.624</td></tr><tr><td>MedWiki</td><td>0.478</td><td>0.548</td><td>0.768</td><td>0.719</td><td>0.628</td></tr><tr><td>MedCode</td><td>0.531</td><td>0.532</td><td>0.758</td><td>0.700</td><td>0.630</td></tr><tr><td>Guideline</td><td>0.459</td><td>0.556</td><td>0.770</td><td>0.724</td><td>0.627</td></tr><tr><td>MediPhi</td><td>0.491</td><td>0.559</td><td>0.766</td><td>0.720</td><td>0.634</td></tr><tr><td>MediPhi-SFT</td><td>0.536</td><td>0.552</td><td>0.766</td><td>0.716</td><td>0.642</td></tr><tr><td>MediPhi-Instruct</td><td>0.548</td><td>0.555</td><td>0.764</td><td>0.714</td><td>0.645</td></tr></table>
2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:26864aa19778ed173fe36bd908bb073f993ca88b9eb7feeed2b7369a63706190
3
+ size 1310006
2025/A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/729add8e-e0d4-485c-88d2-ddbf2ab27215_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/729add8e-e0d4-485c-88d2-ddbf2ab27215_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/729add8e-e0d4-485c-88d2-ddbf2ab27215_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:91b6815d9f2fa61604ddade690d00ad72cfa48c8f4e0d09606dac699b741bd1c
3
+ size 871207
2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:57f670200cbf025502ae3a062aa545f07cfb304f6a11c4d60ef81ceee9826116
3
+ size 1584552
2025/A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/A Multi-persona Framework for Argument Quality Assessment/2481c86c-70af-47d5-be02-6418fbf3c386_content_list.json ADDED
The diff for this file is too large to render. See raw diff