Chelsea707 committed on
Commit ea5e649 · verified · 1 Parent(s): eed4281

Add Batch e58499d3-1bc7-4afc-af66-a561db899aa4 data

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. .gitattributes +64 -0
  2. 2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/32930ba9-d0ab-4ef5-992e-d5a190ee6d2d_content_list.json +0 -0
  3. 2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/32930ba9-d0ab-4ef5-992e-d5a190ee6d2d_model.json +0 -0
  4. 2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/32930ba9-d0ab-4ef5-992e-d5a190ee6d2d_origin.pdf +3 -0
  5. 2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/full.md +672 -0
  6. 2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/images.zip +3 -0
  7. 2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/layout.json +0 -0
  8. 2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/2d212304-cc0a-4f05-ba87-fdea6e7e1bf1_content_list.json +0 -0
  9. 2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/2d212304-cc0a-4f05-ba87-fdea6e7e1bf1_model.json +0 -0
  10. 2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/2d212304-cc0a-4f05-ba87-fdea6e7e1bf1_origin.pdf +3 -0
  11. 2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/full.md +0 -0
  12. 2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/images.zip +3 -0
  13. 2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/layout.json +0 -0
  14. 2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/d1ecf317-cf42-4c20-9ffa-b0e6f009c87a_content_list.json +0 -0
  15. 2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/d1ecf317-cf42-4c20-9ffa-b0e6f009c87a_model.json +0 -0
  16. 2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/d1ecf317-cf42-4c20-9ffa-b0e6f009c87a_origin.pdf +3 -0
  17. 2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/full.md +645 -0
  18. 2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/images.zip +3 -0
  19. 2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/layout.json +0 -0
  20. 2025/DELOC_ Document Element Localizer/b1ff4230-9685-4b29-9a1f-525a8461a4fb_content_list.json +1804 -0
  21. 2025/DELOC_ Document Element Localizer/b1ff4230-9685-4b29-9a1f-525a8461a4fb_model.json +2585 -0
  22. 2025/DELOC_ Document Element Localizer/b1ff4230-9685-4b29-9a1f-525a8461a4fb_origin.pdf +3 -0
  23. 2025/DELOC_ Document Element Localizer/full.md +405 -0
  24. 2025/DELOC_ Document Element Localizer/images.zip +3 -0
  25. 2025/DELOC_ Document Element Localizer/layout.json +0 -0
  26. 2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/53d4ebbb-1b33-4bbd-9f27-62b7ab1bc458_content_list.json +0 -0
  27. 2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/53d4ebbb-1b33-4bbd-9f27-62b7ab1bc458_model.json +0 -0
  28. 2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/53d4ebbb-1b33-4bbd-9f27-62b7ab1bc458_origin.pdf +3 -0
  29. 2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/full.md +1269 -0
  30. 2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/images.zip +3 -0
  31. 2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/layout.json +0 -0
  32. 2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/997032d8-3fb5-4fb3-b06b-f18b26ee4b5c_content_list.json +0 -0
  33. 2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/997032d8-3fb5-4fb3-b06b-f18b26ee4b5c_model.json +0 -0
  34. 2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/997032d8-3fb5-4fb3-b06b-f18b26ee4b5c_origin.pdf +3 -0
  35. 2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/full.md +669 -0
  36. 2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/images.zip +3 -0
  37. 2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/layout.json +0 -0
  38. 2025/DINT Transformer/0d93eff8-c6dd-408c-9936-774d22532538_content_list.json +1505 -0
  39. 2025/DINT Transformer/0d93eff8-c6dd-408c-9936-774d22532538_model.json +1835 -0
  40. 2025/DINT Transformer/0d93eff8-c6dd-408c-9936-774d22532538_origin.pdf +3 -0
  41. 2025/DINT Transformer/full.md +289 -0
  42. 2025/DINT Transformer/images.zip +3 -0
  43. 2025/DINT Transformer/layout.json +0 -0
  44. 2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/ee33bf7a-a428-47bd-8acb-35c3dc3733a8_content_list.json +0 -0
  45. 2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/ee33bf7a-a428-47bd-8acb-35c3dc3733a8_model.json +0 -0
  46. 2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/ee33bf7a-a428-47bd-8acb-35c3dc3733a8_origin.pdf +3 -0
  47. 2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/full.md +0 -0
  48. 2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/images.zip +3 -0
  49. 2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/layout.json +0 -0
  50. 2025/DMDTEval_ An Evaluation and Analysis of LLMs on Disambiguation in Multi-domain Translation/ce78ff04-a550-4881-a64f-5059d60592ac_content_list.json +0 -0
.gitattributes CHANGED
@@ -5231,3 +5231,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2025/QUARTZ_[[:space:]]QA-based[[:space:]]Unsupervised[[:space:]]Abstractive[[:space:]]Refinement[[:space:]]for[[:space:]]Task-oriented[[:space:]]Dialogue[[:space:]]Summarization/33ff17bb-f79e-4140-ad65-5695e75eeede_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2025/QUITO-X_[[:space:]]A[[:space:]]New[[:space:]]Perspective[[:space:]]on[[:space:]]Context[[:space:]]Compression[[:space:]]from[[:space:]]the[[:space:]]Information[[:space:]]Bottleneck[[:space:]]Theory/4ac3b2d1-e6d6-471c-8bcc-facd83571d4c_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2025/QuantAgents_[[:space:]]Towards[[:space:]]Multi-agent[[:space:]]Financial[[:space:]]System[[:space:]]via[[:space:]]Simulated[[:space:]]Trading/da80e027-80c9-45eb-91d3-81b5c9192053_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DDO_[[:space:]]Dual-Decision[[:space:]]Optimization[[:space:]]for[[:space:]]LLM-Based[[:space:]]Medical[[:space:]]Consultation[[:space:]]via[[:space:]]Multi-Agent[[:space:]]Collaboration/32930ba9-d0ab-4ef5-992e-d5a190ee6d2d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DEBATE,[[:space:]]TRAIN,[[:space:]]EVOLVE_[[:space:]]Self‐Evolution[[:space:]]of[[:space:]]Language[[:space:]]Model[[:space:]]Reasoning/2d212304-cc0a-4f05-ba87-fdea6e7e1bf1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DEL-ToM_[[:space:]]Inference-Time[[:space:]]Scaling[[:space:]]for[[:space:]]Theory-of-Mind[[:space:]]Reasoning[[:space:]]via[[:space:]]Dynamic[[:space:]]Epistemic[[:space:]]Logic/d1ecf317-cf42-4c20-9ffa-b0e6f009c87a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DELOC_[[:space:]]Document[[:space:]]Element[[:space:]]Localizer/b1ff4230-9685-4b29-9a1f-525a8461a4fb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DICE_[[:space:]]Structured[[:space:]]Reasoning[[:space:]]in[[:space:]]LLMs[[:space:]]through[[:space:]]SLM-Guided[[:space:]]Chain-of-Thought[[:space:]]Correction/53d4ebbb-1b33-4bbd-9f27-62b7ab1bc458_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DIDS_[[:space:]]Domain[[:space:]]Impact-aware[[:space:]]Data[[:space:]]Sampling[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Training/997032d8-3fb5-4fb3-b06b-f18b26ee4b5c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DINT[[:space:]]Transformer/0d93eff8-c6dd-408c-9936-774d22532538_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DIWALI[[:space:]]-[[:space:]]Diversity[[:space:]]and[[:space:]]Inclusivity[[:space:]]aWare[[:space:]]cuLture[[:space:]]specific[[:space:]]Items[[:space:]]for[[:space:]]India_[[:space:]]Dataset[[:space:]]and[[:space:]]Assessment[[:space:]]of[[:space:]]LLMs[[:space:]]for[[:space:]]Cultural[[:space:]]Text[[:space:]]Adaptation[[:space:]]in[[:space:]]Indian[[:space:]]Context/ee33bf7a-a428-47bd-8acb-35c3dc3733a8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DMDTEval_[[:space:]]An[[:space:]]Evaluation[[:space:]]and[[:space:]]Analysis[[:space:]]of[[:space:]]LLMs[[:space:]]on[[:space:]]Disambiguation[[:space:]]in[[:space:]]Multi-domain[[:space:]]Translation/ce78ff04-a550-4881-a64f-5059d60592ac_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DPED_[[:space:]]Multi-Layer[[:space:]]Noise[[:space:]]Distillation[[:space:]]for[[:space:]]Privacy-Preserving[[:space:]]Text[[:space:]]Embeddings/d21cf23c-44e1-48f8-a769-8253218284de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DRES_[[:space:]]Fake[[:space:]]news[[:space:]]detection[[:space:]]by[[:space:]]dynamic[[:space:]]representation[[:space:]]and[[:space:]]ensemble[[:space:]]selection/4bf9da8d-0112-477e-ba26-351e930cc8b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DRISHTIKON_[[:space:]]A[[:space:]]Multimodal[[:space:]]Multilingual[[:space:]]Benchmark[[:space:]]for[[:space:]]Testing[[:space:]]Language[[:space:]]Models’[[:space:]]Understanding[[:space:]]on[[:space:]]Indian[[:space:]]Culture/35cd1e67-b874-4bb3-b776-362d65c4bff2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DSCD_[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Detoxification[[:space:]]with[[:space:]]Self-Constrained[[:space:]]Decoding/316e381f-af71-42c5-84c3-9c2aeefe3aee_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DSG-MCTS_[[:space:]]A[[:space:]]Dynamic[[:space:]]Strategy-Guided[[:space:]]Monte[[:space:]]Carlo[[:space:]]Tree[[:space:]]Search[[:space:]]for[[:space:]]Diversified[[:space:]]Reasoning[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/52688ba7-4186-4ef9-bf55-0266efa2cdf0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DSMoE_[[:space:]]Matrix-Partitioned[[:space:]]Experts[[:space:]]with[[:space:]]Dynamic[[:space:]]Routing[[:space:]]for[[:space:]]Computation-Efficient[[:space:]]Dense[[:space:]]LLMs/85717972-ae12-4797-9812-4b45298bb743_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DSVD_[[:space:]]Dynamic[[:space:]]Self-Verify[[:space:]]Decoding[[:space:]]for[[:space:]]Faithful[[:space:]]Generation[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/2ffe497e-d863-4b26-b87f-6303c6eee2f0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Data[[:space:]]Descriptions[[:space:]]from[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Influence[[:space:]]Estimation/c9f90fcb-f66b-4339-91b4-64372574d6ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Data[[:space:]]Drives[[:space:]]Unstable[[:space:]]Hierarchical[[:space:]]Generalization[[:space:]]in[[:space:]]LMs/5171725e-85f8-4fed-bcad-2b713335b53c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Data[[:space:]]to[[:space:]]Defense_[[:space:]]The[[:space:]]Role[[:space:]]of[[:space:]]Curation[[:space:]]in[[:space:]]Aligning[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Against[[:space:]]Safety[[:space:]]Compromise/c6bddff5-2820-422d-b680-bac083b876b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Data-Efficient[[:space:]]Hate[[:space:]]Speech[[:space:]]Detection[[:space:]]via[[:space:]]Cross-Lingual[[:space:]]Nearest[[:space:]]Neighbor[[:space:]]Retrieval[[:space:]]with[[:space:]]Limited[[:space:]]Labeled[[:space:]]Data/5320402e-afdb-4013-bfd9-ea587ae073b0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Data-Efficient[[:space:]]Selection[[:space:]]via[[:space:]]Grammatical[[:space:]]Complexity[[:space:]]in[[:space:]]Continual[[:space:]]Pre-training[[:space:]]of[[:space:]]Domain-Specific[[:space:]]LLMs/497879d8-a966-47d7-9398-5e6273e9bb33_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Database-Augmented[[:space:]]Query[[:space:]]Representation[[:space:]]for[[:space:]]Information[[:space:]]Retrieval/edc10ab2-d72b-45d9-af2a-06b932b22771_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DatawiseAgent_[[:space:]]A[[:space:]]Notebook-Centric[[:space:]]LLM[[:space:]]Agent[[:space:]]Framework[[:space:]]for[[:space:]]Adaptive[[:space:]]and[[:space:]]Robust[[:space:]]Data[[:space:]]Science[[:space:]]Automation/8a2faaab-ee91-4d60-97f0-c53839487600_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Date[[:space:]]Fragments_[[:space:]]A[[:space:]]Hidden[[:space:]]Bottleneck[[:space:]]of[[:space:]]Tokenization[[:space:]]for[[:space:]]Temporal[[:space:]]Reasoning/4307ab46-8ff0-44ad-921a-c78a7d2d6ecb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Debatable[[:space:]]Intelligence_[[:space:]]Benchmarking[[:space:]]LLM[[:space:]]Judges[[:space:]]via[[:space:]]Debate[[:space:]]Speech[[:space:]]Evaluation/01322f6c-2124-4b4b-9121-af43f44b01b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Debate-to-Detect_[[:space:]]Reformulating[[:space:]]Misinformation[[:space:]]Detection[[:space:]]as[[:space:]]a[[:space:]]Real-World[[:space:]]Debate[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/6dbb9b1c-224a-485a-acde-0363fc6b0292_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Debiasing[[:space:]]Multilingual[[:space:]]LLMs[[:space:]]in[[:space:]]Cross-lingual[[:space:]]Latent[[:space:]]Space/93839c73-3b17-4d3b-a1e7-5fcf8dd60667_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Decoding[[:space:]]Dense[[:space:]]Embeddings_[[:space:]]Sparse[[:space:]]Autoencoders[[:space:]]for[[:space:]]Interpreting[[:space:]]and[[:space:]]Discretizing[[:space:]]Dense[[:space:]]Retrieval/d77dfd83-7ca4-4528-9e4d-a9448f0beda3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Decoding[[:space:]]the[[:space:]]Rule[[:space:]]Book_[[:space:]]Extracting[[:space:]]Hidden[[:space:]]Moderation[[:space:]]Criteria[[:space:]]from[[:space:]]Reddit[[:space:]]Communities/95a5e064-78eb-4072-ae3d-e30dd66d190a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DecoupleSearch_[[:space:]]Decouple[[:space:]]Planning[[:space:]]and[[:space:]]Search[[:space:]]via[[:space:]]Hierarchical[[:space:]]Reward[[:space:]]Modeling/747799fd-491f-47bc-9e8e-558095a1459b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Deep[[:space:]]Associations,[[:space:]]High[[:space:]]Creativity_[[:space:]]A[[:space:]]Simple[[:space:]]yet[[:space:]]Effective[[:space:]]Metric[[:space:]]for[[:space:]]Evaluating[[:space:]]Large[[:space:]]Language[[:space:]]Models/f38c3c4f-bc30-46b2-806e-279db36ddfae_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DeepResearcher_[[:space:]]Scaling[[:space:]]Deep[[:space:]]Research[[:space:]]via[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]in[[:space:]]Real-world[[:space:]]Environments/a0679f74-4053-4f07-b6f2-684adeef1475_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DeepResonance_[[:space:]]Enhancing[[:space:]]Multimodal[[:space:]]Music[[:space:]]Understanding[[:space:]]via[[:space:]]Music-centric[[:space:]]Multi-way[[:space:]]Instruction[[:space:]]Tuning/f105c119-e3bb-462e-beb2-e3276954879c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DeepWell-Adol_[[:space:]]A[[:space:]]Scalable[[:space:]]Expert-Based[[:space:]]Dialogue[[:space:]]Corpus[[:space:]]for[[:space:]]Adolescent[[:space:]]Positive[[:space:]]Mental[[:space:]]Health[[:space:]]and[[:space:]]Wellbeing[[:space:]]Promotion/51739fb4-3d72-4320-9444-0fd44d20cbb7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Definition[[:space:]]Generation[[:space:]]for[[:space:]]Word[[:space:]]Meaning[[:space:]]Modeling_[[:space:]]Monolingual,[[:space:]]Multilingual,[[:space:]]and[[:space:]]Cross-Lingual[[:space:]]Perspectives/ef243055-5b8d-400d-a39d-962845a2a46c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Demystifying[[:space:]]Domain-adaptive[[:space:]]Post-training[[:space:]]for[[:space:]]Financial[[:space:]]LLMs/b1dc4725-6006-48d1-963e-694798ddbcff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Demystifying[[:space:]]Synthetic[[:space:]]Data[[:space:]]in[[:space:]]LLM[[:space:]]Pre-training_[[:space:]]A[[:space:]]Systematic[[:space:]]Study[[:space:]]of[[:space:]]Scaling[[:space:]]Laws,[[:space:]]Benefits,[[:space:]]and[[:space:]]Pitfalls/4161c0b4-4451-4848-bb53-9eb29b9cf335_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Demystifying[[:space:]]optimized[[:space:]]prompts[[:space:]]in[[:space:]]language[[:space:]]models/aa6dc902-d779-4d55-b522-24fc65012d72_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Deriving[[:space:]]Strategic[[:space:]]Market[[:space:]]Insights[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models_[[:space:]]A[[:space:]]Benchmark[[:space:]]for[[:space:]]Forward[[:space:]]Counterfactual[[:space:]]Generation/172a88b5-3e12-4c97-8ca3-9a21d0df1d82_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Detecting[[:space:]]Corpus-Level[[:space:]]Knowledge[[:space:]]Inconsistencies[[:space:]]in[[:space:]]Wikipedia[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/ad6379ca-50d1-455b-abf6-0cac5a977669_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Detecting[[:space:]]Knowledge[[:space:]]Boundary[[:space:]]of[[:space:]]Vision[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]by[[:space:]]Sampling-Based[[:space:]]Inference/9a4c8025-b0d8-4f02-9a4e-2d75dcfebc2e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Detecting[[:space:]]LLM[[:space:]]Hallucination[[:space:]]Through[[:space:]]Layer-wise[[:space:]]Information[[:space:]]Deficiency_[[:space:]]Analysis[[:space:]]of[[:space:]]Ambiguous[[:space:]]Prompts[[:space:]]and[[:space:]]Unanswerable[[:space:]]Questions/45fda613-e59a-49ae-a113-f7ccad036099_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Detecting[[:space:]]Legal[[:space:]]Citations[[:space:]]in[[:space:]]United[[:space:]]Kingdom[[:space:]]Court[[:space:]]Judgments/9f1ac364-738b-4586-a89a-c4ce4ee7aead_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Detoxifying[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]via[[:space:]]the[[:space:]]Diversity[[:space:]]of[[:space:]]Toxic[[:space:]]Samples/e00fa7b6-685b-4b82-8f54-cc01126d95bf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DiCoRe_[[:space:]]Enhancing[[:space:]]Zero-shot[[:space:]]Event[[:space:]]Detection[[:space:]]via[[:space:]]Divergent-Convergent[[:space:]]LLM[[:space:]]Reasoning/72d7f401-4703-457c-b01c-91e162aaa804_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DiMo-GUI_[[:space:]]Advancing[[:space:]]Test-time[[:space:]]Scaling[[:space:]]in[[:space:]]GUI[[:space:]]Grounding[[:space:]]via[[:space:]]Modality-Aware[[:space:]]Visual[[:space:]]Reasoning/8e8416c4-77d8-4853-9530-51a31b643650_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DiNaM_[[:space:]]Disinformation[[:space:]]Narrative[[:space:]]Mining[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/c717d0a7-e18c-406f-9120-d16ae5b06555_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Diagnosing[[:space:]]Memorization[[:space:]]in[[:space:]]Chain-of-Thought[[:space:]]Reasoning,[[:space:]]One[[:space:]]Token[[:space:]]at[[:space:]]a[[:space:]]Time/7d820ed1-9e24-41f8-aec7-15909d9103b6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Diagram-Driven[[:space:]]Course[[:space:]]Questions[[:space:]]Generation/cc7396c9-1ab9-4594-b21b-2c026719aa0d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Dial-In[[:space:]]LLM_[[:space:]]Human-Aligned[[:space:]]LLM-in-the-loop[[:space:]]Intent[[:space:]]Clustering[[:space:]]for[[:space:]]Customer[[:space:]]Service[[:space:]]Dialogues/21ecdc0b-d65c-4aa6-890b-0e695a0b9ea0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Dialect-SQL_[[:space:]]An[[:space:]]Adaptive[[:space:]]Framework[[:space:]]for[[:space:]]Bridging[[:space:]]the[[:space:]]Dialect[[:space:]]Gap[[:space:]]in[[:space:]]Text-to-SQL/d29b881b-2952-4975-b47d-300b8b8ed222_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Diffusion[[:space:]]vs.[[:space:]]Autoregressive[[:space:]]Language[[:space:]]Models_[[:space:]]A[[:space:]]Text[[:space:]]Embedding[[:space:]]Perspective/60977452-ae23-40b1-a877-d05656184f93_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DiffusionAttacker_[[:space:]]Diffusion-Driven[[:space:]]Prompt[[:space:]]Manipulation[[:space:]]for[[:space:]]LLM[[:space:]]Jailbreak/ed8b0dd8-e297-45f5-b141-d69ae149ddb9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DiplomacyAgent_[[:space:]]Do[[:space:]]LLMs[[:space:]]Balance[[:space:]]Interests[[:space:]]and[[:space:]]Ethical[[:space:]]Principles[[:space:]]in[[:space:]]International[[:space:]]Events_/d7d8e13a-c0ec-4296-9834-3f6c75f162a7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Dipper_[[:space:]]Diversity[[:space:]]in[[:space:]]Prompts[[:space:]]for[[:space:]]Producing[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Ensembles[[:space:]]in[[:space:]]Reasoning[[:space:]]Tasks/e48c23bb-815c-4c27-a870-3f7f7ec5c47a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Direct[[:space:]]Judgement[[:space:]]Preference[[:space:]]Optimization/3a7768a2-bbe4-4940-a63f-80fb83b0e98c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Direct[[:space:]]Value[[:space:]]Optimization_[[:space:]]Improving[[:space:]]Chain-of-Thought[[:space:]]Reasoning[[:space:]]in[[:space:]]LLMs[[:space:]]with[[:space:]]Refined[[:space:]]Values/9694d18e-fd94-42aa-b932-9403784054eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DisLoRA_[[:space:]]Task-specific[[:space:]]Low-Rank[[:space:]]Adaptation[[:space:]]via[[:space:]]Orthogonal[[:space:]]Basis[[:space:]]from[[:space:]]Singular[[:space:]]Value[[:space:]]Decomposition/04012108-fc3d-4bf8-b8c1-4b898118363f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Disambiguation[[:space:]]in[[:space:]]Conversational[[:space:]]Question[[:space:]]Answering[[:space:]]in[[:space:]]the[[:space:]]Era[[:space:]]of[[:space:]]LLMs[[:space:]]and[[:space:]]Agents_[[:space:]]A[[:space:]]Survey/64cfa933-04f4-413f-87c6-02a1a3ca3f50_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DischargeSim_[[:space:]]A[[:space:]]Simulation[[:space:]]Benchmark[[:space:]]for[[:space:]]Educational[[:space:]]Doctor–Patient[[:space:]]Communication[[:space:]]at[[:space:]]Discharge/0ceaf0c7-7a70-4e07-bc3b-a69135e53c97_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/DiscoSG_[[:space:]]Towards[[:space:]]Discourse-Level[[:space:]]Text[[:space:]]Scene[[:space:]]Graph[[:space:]]Parsing[[:space:]]through[[:space:]]Iterative[[:space:]]Graph[[:space:]]Refinement/382770df-90af-480a-907b-cd60524cd4db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Discourse-Driven[[:space:]]Code-Switching_[[:space:]]Analyzing[[:space:]]the[[:space:]]Role[[:space:]]of[[:space:]]Content[[:space:]]and[[:space:]]Communicative[[:space:]]Function[[:space:]]in[[:space:]]Spanish-English[[:space:]]Bilingual[[:space:]]Speech/d6f92695-39bd-4e0d-ae17-dfa73385f0a7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Discrepancy[[:space:]]Detection[[:space:]]at[[:space:]]the[[:space:]]Data[[:space:]]Level_[[:space:]]Toward[[:space:]]Consistent[[:space:]]Multilingual[[:space:]]Question[[:space:]]Answering/8cb6f579-5061-4806-a8f6-ea2c5782c81c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Discriminating[[:space:]]Form[[:space:]]and[[:space:]]Meaning[[:space:]]in[[:space:]]Multilingual[[:space:]]Models[[:space:]]with[[:space:]]Minimal-Pair[[:space:]]ABX[[:space:]]Tasks/80eb5847-375d-4c13-be9e-34fb95d5aed8_origin.pdf filter=lfs diff=lfs merge=lfs -text
2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/32930ba9-d0ab-4ef5-992e-d5a190ee6d2d_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/32930ba9-d0ab-4ef5-992e-d5a190ee6d2d_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/32930ba9-d0ab-4ef5-992e-d5a190ee6d2d_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:322cfe73e6b377a4a8a38921045921434b826a0bc07f0d53766d2072b3877969
+ size 568131

2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/full.md ADDED
@@ -0,0 +1,672 @@
# DDO: Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration

Zhihao Jia $^{1}$, Mingyi Jia $^{1}$, Junwen Duan $^{1*}$, Jianxin Wang $^{1}$

<sup>1</sup>Hunan Provincial Key Lab on Bioinformatics,
School of Computer Science and Engineering, Central South University

{zhihaojia, jiamingyi, jwduan}@csu.edu.cn, jxwang@mail.csu.edu.cn

# Abstract

Large Language Models (LLMs) demonstrate strong generalization and reasoning abilities, making them well-suited for complex decision-making tasks such as medical consultation (MC). However, existing LLM-based methods often fail to capture the dual nature of MC, which entails two distinct sub-tasks: symptom inquiry, a sequential decision-making process, and disease diagnosis, a classification problem. This mismatch often results in ineffective symptom inquiry and unreliable disease diagnosis. To address this, we propose DDO, a novel LLM-based framework that performs Dual-Decision Optimization by decoupling the two sub-tasks and optimizing them with distinct objectives through a collaborative multi-agent workflow. Experiments on three real-world MC datasets show that DDO consistently outperforms existing LLM-based approaches and achieves competitive performance with state-of-the-art generation-based methods, demonstrating its effectiveness in the MC task. The code is available at https://github.com/zh-jia/DDO.

# 1 Introduction

Medical Consultation (MC), aiming to automate symptom collection and support clinical diagnosis, has become a promising application in AI-driven healthcare and attracted growing attention (Zhao et al., 2024; Hu et al., 2024; Chopra and Shah, 2025). As shown in Figure 1, MC involves multi-turn interactions between an AI doctor and a patient, encompassing two core decision-making processes: symptom inquiry—a sequential decision task over a large action space—and disease diagnosis—a classification task over a limited set of candidate diseases (Chen et al., 2023, 2024). The effectiveness of MC hinges on the AI doctor's ability to perform both efficient information seeking and accurate disease differentiation.

![](images/9b5e64db0f7ec90e0a3bd2a62b0b4740cd683902341654dd363f3aa5497da171.jpg)
Figure 1: An example of a Medical Consultation (MC) task, where an AI doctor iteratively inquires about additional symptoms based on the patient's initial self-reported symptoms and ultimately provides a diagnosis.

Compared to the models used in traditional reinforcement learning (RL)-based methods (Zhong et al., 2022; Yan et al., 2023) and generation-based approaches (Hou et al., 2023; Zhao et al., 2024), large language models (LLMs) provide stronger generalization and more transparent reasoning (Qin et al., 2024; Singh et al., 2024), potentially reducing training costs and improving interpretability for the MC task. However, due to hallucinations and limited domain adaptation, directly applying LLMs to MC often results in inefficient inquiry and unreliable diagnosis. Recent approaches (Hu et al., 2024; Chopra and Shah, 2025) improve information gathering by incorporating LLM-based planning, yet overlook diagnostic optimization. To jointly optimize symptom inquiry and disease diagnosis, Chen et al. (2024) introduced the Chain-of-Diagnosis (CoD) framework, enabling LLMs to learn both components from CoD training data. Nevertheless, the fundamentally different nature of these two decision-making sub-tasks presents significant challenges for unified optimization.

To overcome these limitations, we propose DDO, an LLM-based MC approach that leverages multi-agent collaboration to decouple and optimize the two core decision-making components with distinct objectives. For symptom inquiry, DDO integrates a lightweight RL-based policy agent that generates reliable candidate actions, thereby reducing the decision-making burden on LLMs. For disease diagnosis, DDO derives fine-grained diagnostic confidence from LLM logits and enhances disease discrimination through a plug-and-play adapter trained via in-batch contrastive learning. Experiments on three real-world MC datasets demonstrate that DDO consistently outperforms other LLM-based methods and achieves performance on par with state-of-the-art (SOTA) generation-based approaches, while requiring substantially less training overhead. Our contributions are as follows:

- We introduce DDO, a novel multi-agent framework for the MC task, where four collaborative agents enable an effective and transparent diagnostic MC workflow.
- DDO decouples the two core decision-making processes—symptom inquiry and disease diagnosis—and optimizes them with distinct objectives, leading to more informative questioning and improved diagnostic accuracy.
- By tuning only a small number of model parameters, DDO surpasses other LLM-based methods and achieves performance comparable to SOTA generation-based approaches.

# 2 Related Work

# 2.1 Medical Consultation Task

Medical Consultation (MC), a key application of AI in medicine (Valizadeh and Parde, 2022), was initially formulated as a Markov Decision Process (MDP) and optimized using reinforcement learning (RL) (Tang et al., 2016; Wei et al., 2018; Kao et al., 2018). However, due to the high variability of RL agents (Xia et al., 2020), researchers have incorporated disease-symptom prior knowledge to enhance decision-making by the agents (Xu et al., 2019; Liu et al., 2022; Yan et al., 2023). HRL (Zhong et al., 2022) introduced a hierarchical RL framework to refine the action space. Additionally, generative approaches such as Diaformer (Chen et al., 2022), CoAD (Wang et al., 2023), MTDiag (Hou et al., 2023) and HAIformer (Zhao et al., 2024) leveraged attention mechanisms to enhance optimization efficiency, achieving SOTA performance in the MC task. AIME (Tu et al., 2024) demonstrated the potential of LLMs in medical history-taking by learning from realistic medical dialogues. MediQ (Li et al., 2024b) introduced an abstention module to assess whether the collected diagnostic evidence is sufficient. UoT (Hu et al., 2024) and MISQ-HF (Chopra and Shah, 2025) aimed to reduce decision uncertainty through LLM-driven planning. CoD (Chen et al., 2024) improved the interpretability of doctor agent's decisions by generating transparent chained thought processes.

# 2.2 LLMs in Medical Decision-Making

LLMs have demonstrated strong potential across various medical applications (Zhou et al., 2024). They are capable of answering medical exam questions (Kim et al., 2024; Shi et al., 2024), collecting patient history (Johri et al., 2024), offering diagnostic suggestions (Jia et al., 2025; Rose et al., 2025), and recommending treatment plans (Li et al., 2024a). Leveraging prompt engineering (Zheng et al., 2024; Liu et al., 2024) and domain adaptation techniques (Tian et al., 2024; Wang et al., 2025), their reasoning capabilities have significantly improved, leading to more reliable medical decision-making. Moreover, to tackle more complex tasks in the medical domain, recent studies (Kim et al., 2024; Bani-Harouni et al., 2024) have explored the use of multiple LLM agents, offering promising directions for enabling collaborative decision-making in challenging clinical scenarios.

# 3 Problem Definition

A real-world Medical Consultation Record (MCR) is denoted as $\mathcal{P} = \{\mathcal{S}^{\mathrm{ex}},\mathcal{S}^{\mathrm{im}},d_l\}$, where $\mathcal{S}^{\mathrm{ex}} = \{(s_i^{\mathrm{ex}},p_i^{\mathrm{ex}})\}_{i = 1}^{l_1}$ represents explicit symptoms initially reported by the patient, and implicit symptoms $\mathcal{S}^{\mathrm{im}} = \{(s_j^{\mathrm{im}},p_j^{\mathrm{im}})\}_{j = 1}^{l_2}$ are elicited through follow-up inquiries by the doctor. The label $d_{l}$ denotes the ground-truth disease of the patient.

The MC task simulates a multi-turn interaction process between an AI doctor and a simulated patient, where the AI doctor actively collects diagnostic information to facilitate differential diagnosis. Starting from the initial symptoms $\mathcal{S}^{\mathrm{ex}}$, the AI doctor selectively inquires about additional symptoms $\mathcal{S}^{\mathrm{ad}}$ to accumulate diagnostic evidence. The interaction terminates when sufficient information is collected or a predefined maximum number of turns $L$ is reached. The final diagnosis $d_p$ is selected from the candidate set $D = \{d_1, d_2, \ldots, d_n\}$ based on the highest diagnostic confidence:

$$
d_{p} = \arg\max_{d_{i} \in D} \mathrm{conf}_{i}\left(\mathcal{S}^{\mathrm{ex}} \cup \mathcal{S}^{\mathrm{ad}}, d_{i}\right) \tag{1}
$$

# 4 Multi-Agent Collaborative Medical Consultation in DDO

To enhance the decision-making capability of LLMs in the MC task, the DDO framework integrates three LLM-based agents—Diagnosis Agent, Inquiry Agent, and Patient Agent—alongside an RL-based Policy Agent implemented with an actor-critic architecture. The Diagnosis Agent employs a learnable adapter to enhance the LLM's ability to perform differential diagnosis, while the Policy Agent and the Inquiry Agent collaborate to strategically acquire informative symptoms. All agents operate over a shared memory, consisting of (1) a static component that encodes prior disease–symptom knowledge exclusively derived from the training portion of the dataset (thus avoiding any leakage of test labels), and (2) a dynamic component that is continuously updated with observed symptoms and diagnostic confidence throughout the consultation process.

![](images/d4770bba0a747952c3a427ac4abd9f077ecf077a0d960f11835cc9375279fb85.jpg)
Figure 2: Overview of the proposed DDO framework, comprising four collaborative agents operating over a shared memory to execute the consultation workflow: the Diagnosis Agent estimates disease confidences from LLM logits; the Policy Agent generates candidate actions via masked sampling; the Inquiry Agent selects the optimal symptom to query or terminates the consultation; and the Patient Agent responds based on the patient profile.

As shown in Figure 2, each consultation round proceeds through four steps: 1) Diagnostic Confidence Estimation—The Diagnosis Agent estimates confidence scores for each candidate disease based on the current diagnostic evidence. 2) Candidate Actions Generation—The Policy Agent samples multiple interaction actions based on the current state, providing a set of reliable choices for the next inquiry. 3) Optimal Inquiry Action Selection—The Inquiry Agent selects the most informative symptom checking action from the candidate actions. 4) Patient Response Simulation—The Patient Agent simulates the patient's responses, indicating whether the inquired symptom is present.
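
To make this workflow concrete, the following sketch runs one consultation episode through the four steps. The agent objects and their method names (`estimate_confidence`, `sample_candidates`, `select`, `respond`) are hypothetical placeholders for illustration, not the authors' released code.

```python
# Minimal sketch of one DDO consultation episode (illustrative interfaces).
def run_consultation(diagnosis_agent, policy_agent, inquiry_agent, patient_agent,
                     memory, max_turns=10):
    for _ in range(max_turns):
        # 1) Diagnostic Confidence Estimation over all candidate diseases.
        conf = diagnosis_agent.estimate_confidence(memory)      # {disease: confidence}
        # 2) Candidate Actions Generation via masked sampling from the RL policy.
        candidates = policy_agent.sample_candidates(memory, conf)
        if "terminate" in candidates:                           # termination action sampled
            break
        # 3) Optimal Inquiry Action Selection by the LLM-based Inquiry Agent.
        symptom = inquiry_agent.select(candidates, memory, conf)
        # 4) Patient Response Simulation from the patient profile.
        status = patient_agent.respond(symptom)
        memory.update(symptom, status)                          # E = E ∪ {(s_o, p_o)}
    conf = diagnosis_agent.estimate_confidence(memory)
    return max(conf, key=conf.get)                              # Eq. (1): highest confidence
```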

# 4.1 Diagnostic Confidence Estimation

# 4.1.1 Binary Token Probability

The Diagnosis Agent estimates the diagnostic confidence score $\mathrm{conf}_{i} \in (0,1)$ for each candidate disease $d_{i} \in D$, reflecting the likelihood of $d_{i}$ being the correct diagnosis given the current evidence. Rather than relying on decoding to generate numeric scores (Li et al., 2024b; Chen et al., 2024; Qin et al., 2024), the Diagnosis Agent adopts Binary Token Probability (BTP)—a logit-based method inspired by multiple-choice QA (Detommaso et al., 2024; Kumar et al., 2024)—to provide a more efficient and interpretable confidence estimation for LLMs.

Specifically, given a structured prompt that integrates the current evidence with disease-specific knowledge of $d_{i}$ retrieved from the shared memory, the LLM-based Diagnosis Agent is instructed to output a single binary token (True/False) indicating whether $d_{i}$ is a plausible diagnosis. We extract the logits distribution at the position of this binary token and obtain the logits corresponding to True and False, denoted as $\mathrm{logit}_{\mathrm{T}}$ and $\mathrm{logit}_{\mathrm{F}}$, respectively. The diagnostic confidence is finally computed via a temperature-scaled softmax over the binary logits, where the temperature $\tau$ controls the sharpness of the logits distribution:

$$
\mathrm{conf}_{i} = \frac{\exp(\mathrm{logit}_{\mathrm{T}} / \tau)}{\exp(\mathrm{logit}_{\mathrm{T}} / \tau) + \exp(\mathrm{logit}_{\mathrm{F}} / \tau)} \tag{2}
$$
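
A minimal sketch of BTP follows, assuming a Hugging Face-style causal LM whose tokenizer maps "True" and "False" to single leading tokens (a simplification; multi-token cases would need extra handling):

```python
import torch

def btp_confidence(model, tokenizer, prompt: str, tau: float = 1.0) -> float:
    """Binary Token Probability (Eq. 2): two-way softmax over True/False logits."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]                  # next-token logits
    id_true = tokenizer.encode("True", add_special_tokens=False)[0]
    id_false = tokenizer.encode("False", add_special_tokens=False)[0]
    pair = torch.stack([logits[id_true], logits[id_false]])
    return torch.softmax(pair / tau, dim=0)[0].item()           # temperature-scaled softmax
```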

# 4.1.2 Calibrating the Diagnostic Confidence

Diagnostic confidence scores from base LLMs often lack discriminative power when candidate diseases share overlapping symptoms—e.g., both upper respiratory tract infections and pneumonia commonly present with fever and cough. Existing calibration methods typically require fine-grained supervision (Detommaso et al., 2024; Chen et al., 2024), such as expert-annotated confidence scores, which are often infeasible to implement in clinical practice. Instead, we treat diagnosis as a multi-class classification task (Ma et al., 2024) and leverage ground-truth disease labels $d_l$ from Medical Consultation Records (MCRs) as weak supervision to calibrate the diagnostic confidence.

To construct the calibration training data, we generate partial consultation trajectories by truncating each full MCR at different interaction steps. For a training MCR $\mathcal{P}$ with $k$ turns, we extract $(k - l_{\mathrm{self}} + 1)$ sub-trajectories of the form $\mathcal{P}_c = \{(s_1,p_1),\ldots,(s_c,p_c),d_l\}$, where $l_{\mathrm{self}}$ is the number of self-reported symptoms and $l_{\mathrm{self}} \le c \le k$. Each sub-trajectory $\mathcal{P}_c$ serves as a training instance.

We calibrate diagnostic confidence through in-batch contrastive learning (Ma et al., 2024), training a lightweight adapter using LoRA (Hu et al., 2022) to improve the Diagnosis Agent's ability to distinguish among similar diseases. For each patient sub-trajectory $\mathcal{P}_c$, the ground-truth diagnosis $d_{l}$ is treated as the positive instance, while all other candidate diseases serve as negatives. We construct a target distribution $\mathrm{dist}_{\mathrm{target}} = [\epsilon, \dots, 1 - \epsilon, \dots, \epsilon]$, where $\epsilon$ is a label smoothing constant. The Diagnosis Agent outputs confidence scores $\{\mathrm{conf}_{i}\}_{i=1}^{n}$ using the BTP method, yielding a batch-level predictive distribution $\mathrm{dist}_{\mathrm{diag}}$. The calibration objective minimizes the KL divergence between the target and predicted distributions:

$$
\mathcal{L}_{\mathrm{KL}} = \sum_{i=1}^{n} \mathrm{dist}_{\mathrm{target}}(d_{i}) \log \frac{\mathrm{dist}_{\mathrm{target}}(d_{i})}{\mathrm{dist}_{\mathrm{diag}}(d_{i})} \tag{3}
$$
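
One plausible implementation of this objective is sketched below; the (n, 2) logit layout, the smoothing value, and the normalization of the smoothed target are our assumptions rather than details from the paper:

```python
import torch

def calibration_loss(binary_logits: torch.Tensor, label_idx: int,
                     tau: float = 1.0, eps: float = 0.05) -> torch.Tensor:
    """Sketch of the in-batch KL calibration objective (Eq. 3).

    binary_logits: (n, 2) rows of [logit_True, logit_False] for the n candidate
    diseases of one sub-trajectory; label_idx marks the ground-truth disease.
    """
    conf = torch.softmax(binary_logits / tau, dim=-1)[:, 0]     # BTP per disease (Eq. 2)
    dist_diag = conf / conf.sum()                               # batch-level distribution
    n = binary_logits.shape[0]
    dist_target = torch.full((n,), eps)                         # label-smoothed target
    dist_target[label_idx] = 1.0 - eps
    dist_target = dist_target / dist_target.sum()               # normalize (our assumption)
    return torch.sum(dist_target * torch.log(dist_target / dist_diag))
```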

# 4.2 Candidate Actions Generation

Symptom inquiry poses a significant challenge due to the high-dimensional action space, which limits the LLM's ability to identify the most informative symptoms. A natural solution is to reduce decision complexity by supplying a small set of reliable candidate symptoms. Since each inquiry depends only on the current state, the process satisfies the Markov property, making reinforcement learning (RL) well-suited for this task (Sun et al., 2024). Unlike RLHF approaches that fine-tune LLM parameters—such as GRPO (Ramesh et al., 2024) in DeepSeek-R1 (Guo et al., 2025)—we adopt a lightweight RL policy model as an external agent to guide the LLM's inquiry decisions.

# 4.2.1 Observation State and Action Space

In reinforcement learning, the observation state encodes the information available to the agent at each decision step, while the action space defines the set of allowable actions.

We define the observation state as $S = [p, c]$, where $p \in \{-1, 0, 1\}^m$ is an $m$-dimensional symptom vector indicating absence $(-1)$, unknown status $(0)$, or presence $(1)$ of each symptom (initialized to $0$), and $c \in \mathbb{R}^n$ is a diagnostic confidence vector over $n$ candidate diseases.

The action space $\mathcal{A} = \{a_i\}_{i=1}^{m+1}$ comprises $m$ inquiry actions—each $a_i$ corresponds to checking the $i$-th symptom—and a termination action $a_{m+1}$ to end the consultation. To reflect clinical heuristics where physicians prioritize symptoms relevant to likely diagnoses (Stanley and Campos, 2013), we introduce a binary action mask $\mathcal{M} \in \{0,1\}^{m+1}$ to constrain the action space. The mask enables actions $(\mathcal{M}_i = 1)$ associated with symptoms relevant to the top-$w$ ranked diseases and disables actions that have already been selected or are deemed irrelevant $(\mathcal{M}_i = 0)$. The final masked action space is:

$$
\mathcal{A}_{\mathrm{masked}} = \mathcal{A} \odot \mathcal{M} \tag{4}
$$
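
The sketch below illustrates how the observation state and action mask might be assembled; the top-$w$ cutoff value and keeping the termination action always enabled are assumptions for illustration:

```python
import numpy as np

def build_state_and_mask(symptom_status: np.ndarray, conf: np.ndarray,
                         disease_symptoms: list, w: int = 3):
    """Observation state S = [p, c] and binary action mask M (Eq. 4). Illustrative.

    symptom_status: m-vector in {-1, 0, 1}; conf: n-vector of disease confidences;
    disease_symptoms[i]: iterable of symptom indices relevant to disease i.
    """
    m = len(symptom_status)
    state = np.concatenate([symptom_status, conf])              # S = [p, c]
    mask = np.zeros(m + 1, dtype=np.int8)                       # m inquiries + termination
    for d in np.argsort(conf)[::-1][:w]:                        # top-w ranked diseases
        for s in disease_symptoms[d]:
            mask[s] = 1
    mask[:m][symptom_status != 0] = 0                           # already-known symptoms off
    mask[m] = 1                                                 # termination stays available
    return state, mask
```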

# 4.2.2 RL Policy Learning

We adopt an actor-critic architecture to jointly learn the policy $\pi$, which is implemented via multi-layer perceptron (MLP) layers. The policy $\pi$ outputs a log-probability distribution over actions. Training is conducted using Proximal Policy Optimization (PPO) (Schulman et al., 2017), which maximizes the total reward return $\mathcal{R}$, composed of both short-term and long-term components.

The short-term reward $\mathcal{R}_{\mathrm{short}}$ is computed after each doctor-patient interaction:

$$
\mathcal{R}_{\mathrm{short}}\left(S_{t}, a_{t}, S_{t+1}\right) = \mathrm{freq}(a_{t}) + r_{\mathrm{hit}} + r_{\mathrm{rank}}, \tag{5}
$$

where $\mathrm{freq}(a_t)$ denotes the frequency of symptom $s_t$ (corresponding to action $a_t$) among the relevant symptoms of the ground-truth disease $d_l$, with negative values assigned to irrelevant symptoms. The term $r_{\mathrm{hit}}$ is positive if $s_t$ is present in the patient profile $\mathcal{P}$ and negative otherwise. The term $r_{\mathrm{rank}}$ measures the change in the confidence ranking of $d_l$ from state $S_t$ to $S_{t+1}$, assigning positive reward for improved ranking, negative for worsened ranking, and zero if unchanged.

The long-term reward $\mathcal{R}_{\mathrm{long}}(d_p)$ assesses the final diagnostic prediction $d_{p}$, yielding a positive reward if $d_{p} = d_{l}$ and a negative reward otherwise.
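
A sketch of the short-term reward under these definitions; the reward magnitudes `r_pos`/`r_neg` and the frequency lookup table are illustrative placeholders, not the paper's tuned values:

```python
def short_term_reward(symptom: int, label_disease: int, in_profile: bool,
                      rank_before: int, rank_after: int, symptom_freq: dict,
                      r_pos: float = 1.0, r_neg: float = -1.0) -> float:
    """Illustrative short-term reward (Eq. 5): freq(a_t) + r_hit + r_rank."""
    # freq(a_t): symptom frequency among the label disease's relevant symptoms;
    # irrelevant symptoms fall back to a negative value.
    freq = symptom_freq.get((label_disease, symptom), r_neg)
    r_hit = r_pos if in_profile else r_neg                      # symptom in patient profile?
    if rank_after < rank_before:                                # rank 1 = most confident
        r_rank = r_pos                                          # ranking of d_l improved
    elif rank_after > rank_before:
        r_rank = r_neg                                          # ranking worsened
    else:
        r_rank = 0.0                                            # unchanged
    return freq + r_hit + r_rank
```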

# 4.2.3 Masked Sampling for Candidate Actions

Because the RL policy samples actions stochastically, no single sample is guaranteed to be optimal. By performing multiple sampling iterations, however, we can leverage this stochasticity to provide the LLM with a reliable set of candidate actions, thus avoiding decision-making within a large action space.

Specifically, given the current state $S$, the Policy Agent $\pi$ samples $N$ actions from the masked action space to construct a candidate action set $A_{\mathrm{candi}}$ (Equation 6). If the sampled candidate action set includes the termination action $a_{m+1}$, the consultation process is terminated.

$$
A_{\mathrm{candi}} = \left\{ a_{i} \sim \pi\left(\mathcal{A}_{\mathrm{masked}} \mid S\right) \right\}_{i=1}^{N} \tag{6}
$$
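
Masked sampling might look like the following, assuming the actor outputs one logit per action; the sample count and the deduplication into a set are our assumptions:

```python
import torch

def sample_candidates(action_logits: torch.Tensor, mask: torch.Tensor,
                      n_samples: int = 5) -> set:
    """Masked sampling of candidate actions (Eq. 6). Illustrative sketch.

    action_logits: (m+1,) actor outputs; mask: (m+1,) binary action mask;
    index m is the termination action. Returns distinct candidate indices.
    """
    masked = action_logits.masked_fill(mask == 0, float("-inf"))  # disable masked actions
    probs = torch.softmax(masked, dim=-1)
    draws = torch.multinomial(probs, num_samples=n_samples, replacement=True)
    return set(draws.tolist())
```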

# 4.3 Optimal Inquiry Action Selection

Upon receiving the candidate action set $A_{\mathrm{candi}}$, the Inquiry Agent selects the most informative inquiry action $a_o$, corresponding to symptom $s_o$, by following a set of predefined reasoning strategies. Specifically, it either: (1) attempts to confirm the most probable diagnosis by prioritizing symptoms that are highly representative of the top-ranked disease, or (2) selects the symptom most relevant to the currently collected diagnostic evidence. If none of the candidate actions meet the selection criteria, the Inquiry Agent requests the Policy Agent to regenerate a new set of actions. By engaging in step-by-step reasoning, the Inquiry Agent ensures that the selected symptom $s_o$ maximizes diagnostic value while maintaining interpretability and transparency throughout the decision-making process.

# 4.4 Patient Response Simulation

The Patient Agent responds to each inquiry from the Inquiry Agent based on the patient profile $\mathcal{P}$. If the queried symptom $s_o$ is explicitly documented, it returns the recorded status $p_o$. However, since real-world Medical Consultation Records (MCRs) typically contain only a limited subset of symptom annotations, many queried symptoms may be undocumented, leading to ineffective queries and ambiguous responses.

To reduce this uncertainty bias, we leverage the disease label $d_{l}$ from the MCR and incorporate clinical knowledge for inference. If the queried symptom is uncommon in the clinical presentation of $d_{l}$, the agent infers it is likely absent; if the symptom is strongly associated with $d_{l}$, it is inferred to be likely present. Once the presence status $p_{o}$ is determined, the shared diagnostic memory is updated as: $E = E \cup \{(s_{o},p_{o})\}$.
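
For intuition, a simplified rule-based stand-in for this response logic is sketched below; the paper's Patient Agent is LLM-based, and the association-strength threshold here is a hypothetical proxy for "strongly associated":

```python
def patient_response(symptom: str, profile: dict, label_disease: str,
                     disease_knowledge: dict, threshold: float = 0.5) -> int:
    """Simplified Patient Agent logic: 1 = present, -1 = absent. Illustrative only.

    profile: documented symptom -> status from the MCR;
    disease_knowledge: disease -> {symptom: association strength}.
    """
    if symptom in profile:
        return profile[symptom]                                 # return recorded status
    strength = disease_knowledge.get(label_disease, {}).get(symptom, 0.0)
    return 1 if strength >= threshold else -1                   # infer from d_l's presentation
```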

In DDO, the multi-agent collaborative consultation proceeds for up to $L$ turns, and terminates either when the turn limit is reached or when the termination action is sampled by the Policy Agent. The disease with the highest diagnostic confidence is then selected as the final diagnosis.

# 5 Experiments

# 5.1 Baselines

# 5.1.1 Traditional Methods

We compare the proposed DDO framework with two state-of-the-art generation-based methods in the MC task: MTDiag (Hou et al., 2023), which independently optimizes symptom inquiry and disease diagnosis, and HAIformer (Zhao et al., 2024), which leverages human-AI collaboration. We additionally include EBAD (Yan et al., 2023) as a reinforcement learning-based baseline.

# 5.1.2 LLM-Based Methods

We compare the DDO framework with three LLM-based methods in the MC task: Uncertainty of Thoughts (UoT) (Hu et al., 2024), Chain-of-Diagnosis (CoD) (Chen et al., 2024), and Direct Prompting (DP). UoT plans future inquiries by computing information gain. CoD employs instruction tuning to teach LLMs transparent diagnostic reasoning. DP relies solely on the inherent capabilities of the LLM, without prompt engineering or instruction tuning.

# 5.2 Datasets

We evaluate the proposed DDO framework and the baseline methods on three real-world medical consultation datasets: DXY (Xu et al., 2019), collected from online medical dialogues, and GMD (Liu et al., 2022) and CMD (Yan et al., 2023), both derived from electronic medical records (EMRs). Dataset statistics are summarized in Table 1. Other dataset details can be found in Appendix A.1.

<table><tr><td>Dataset</td><td>DXY</td><td>GMD</td><td>CMD</td></tr><tr><td># Total MCR Samples</td><td>527</td><td>2,374</td><td>5,200</td></tr><tr><td># Disease Types</td><td>5</td><td>12</td><td>27</td></tr><tr><td># Symptom Types</td><td>41</td><td>118</td><td>358</td></tr><tr><td># Avg. Symptoms</td><td>4.74</td><td>5.55</td><td>17.92</td></tr></table>

Table 1: Dataset statistics, including the number of medical consultation records (MCR), disease types, symptom types, and average recorded symptoms per sample.

# 5.3 Evaluation Metrics

Diagnostic Accuracy. We use diagnostic accuracy (Acc) as an evaluation metric, which measures the proportion of test cases where the ground-truth disease is correctly identified from a limited set of candidate diseases. This metric reflects the model's ability to perform accurate differential diagnosis.

Average Turns. The average number of inquiry turns Avg.n is calculated as the total number of symptom queries divided by the number of test samples, reflecting the efficiency and informativeness of the multi-turn consultation process.
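
Both metrics reduce to simple averages over the test set, as in this minimal sketch (field names are illustrative):

```python
def evaluate(results: list) -> tuple:
    """Diagnostic accuracy (Acc) and average inquiry turns (Avg.n) over the test set.

    results: one record per test case, e.g. {"pred": "flu", "label": "flu", "turns": 9}.
    """
    acc = sum(r["pred"] == r["label"] for r in results) / len(results)
    avg_n = sum(r["turns"] for r in results) / len(results)
    return acc, avg_n
```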
178
+
179
+ # 5.4 Implementation Details
180
+
181
+ For traditional baselines, we reproduce EBAD and HAIformer following their original implementations, while the MTDiag results are adopted from their paper. All LLM-based baselines are re-implemented, where UoT adopts its pruned version to improve planning efficiency. The backbone LLMs include the Qwen2.5 series (Yang et al., 2024), GPT-4o-mini (Achiam et al., 2023), and DiagnosisGPT (Chen et al., 2024), which is finetuned to align with the CoD consultation process. For methods that involve parameter tuning, including CoD and our proposed DDO, we use locally deployed LLMs; other LLM-based baselines rely on API-based models. The pruning percentage in
182
+
183
+ UoT is set to -0.5. The threshold $\tau$ in CoD is set to 0.5. All LLM-based approaches adopt the same LLM for both doctor and patient agents. The maximum of doctor-patient interaction turns $L$ is set to 10 for all the methods. More implementation details can be found in the Appendix A.
184
+
185
+ # 5.5 Overall Performance
186
+
187
+ Table 2 summarizes the main experimental results of the proposed DDO framework and baseline methods across the three MC datasets.
188
+
189
+ Comparison with Traditional Methods. DDO achieves diagnostic accuracy on par with traditional baselines while substantially reducing training overhead. For instance, the SOTA baseline HAIformer adopts a multi-stage training pipeline requiring hundreds of epochs for training its diagnostic module, while DDO only needs a few epochs for confidence calibration—less than one epoch on both the GMD and CMD datasets. This efficiency stems from the strong generalization capabilities of LLMs, which enable effective domain adaptation with a small number of parameter tuning. Moreover, the inherent reasoning ability of LLMs contributes to the interpretability of the MC task.
190
+
191
+ Comparison with LLM-based Methods. Compared to other LLM-based methods, DDO significantly improves diagnostic effectiveness. After symptom inquiry, it boosts diagnostic accuracy by an average of $24.6\%$ , $11.3\%$ , and $3.2\%$ on the DXY, GMD, and CMD datasets, respectively, over initial diagnoses based only on self-reported symptoms. DDO consistently achieves the highest accuracy, notably $94.2\%$ on DXY. The DP baseline reflects the raw inquiry behavior of LLMs, where the lack of external guidance results in arbitrary questioning and unreliable diagnoses. UoT improves upon DP by using LLM-based planning to prioritize symptoms with the highest expected information gain. However, its pruning strategy—eliminating candidate diseases as soon as key symptoms are denied—restricts comprehensive evidence gathering, often resulting in a small Avg.n and ultimately limiting diagnostic performance. CoD attempts to jointly optimize symptom inquiry and diagnosis via large-scale synthetic reasoning data but yields negative performance gains. This is likely due to the intrinsic differences between the two sub-tasks, which hinder effective unified learning. In contrast, DDO decouples the two decision-making processes in the MC task, enhancing them separately through a lightweight RL policy and a diagnostic adapter.
192
+
193
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">LLM</td><td colspan="3">DXY</td><td colspan="3">GMD</td><td colspan="3">CMD</td></tr><tr><td>Accinit</td><td>Acc</td><td>Avg.n</td><td>Accinit</td><td>Acc</td><td>Avg.n</td><td>Accinit</td><td>Acc</td><td>Avg.n</td></tr><tr><td colspan="11">Traditional Methods</td></tr><tr><td>EBAD (Yan et al., 2023)</td><td>-</td><td>-</td><td>72.1</td><td>7.0</td><td>-</td><td>78.7</td><td>7.4</td><td>-</td><td>64.1</td><td>9.0</td></tr><tr><td>MTDiag (Hou et al., 2023)</td><td>-</td><td>-</td><td>81.9</td><td>9.6</td><td>-</td><td>85.9</td><td>9.6</td><td>-</td><td>-</td><td>-</td></tr><tr><td>HALformer (Zhao et al., 2024)</td><td>-</td><td>-</td><td>88.5</td><td>1.8</td><td>-</td><td>90.4</td><td>1.9</td><td>-</td><td>71.1</td><td>3.6</td></tr><tr><td colspan="11">LLM-based Methods</td></tr><tr><td rowspan="4">DP</td><td>Qwen2.5-72B-Instruct</td><td>59.6</td><td>64.4</td><td>9.0</td><td>59.8</td><td>64.4</td><td>9.9</td><td>44.2</td><td>46.8</td><td>9.8</td></tr><tr><td>GPT-4o-mini</td><td>57.7</td><td>61.5</td><td>10.0</td><td>57.3</td><td>65.7</td><td>10.0</td><td>45.9</td><td>49.5</td><td>10.0</td></tr><tr><td>Qwen2.5-14B-Instruct</td><td>54.8</td><td>53.8</td><td>10.0</td><td>55.2</td><td>61.1</td><td>10.0</td><td>42.2</td><td>45.7</td><td>10.0</td></tr><tr><td>Qwen2.5-7B-Instruct</td><td>59.6</td><td>63.5</td><td>10.0</td><td>54.8</td><td>57.3</td><td>10.0</td><td>46.8</td><td>46.2</td><td>10.0</td></tr><tr><td rowspan="4">UoT (Hu et al., 2024)</td><td>Qwen2.5-72B-Instruct</td><td>-</td><td>67.3</td><td>0.1</td><td>-</td><td>68.6</td><td>0.1</td><td>-</td><td>34.6</td><td>0.1</td></tr><tr><td>GPT-4o-mini</td><td>-</td><td>64.4</td><td>0.1</td><td>-</td><td>65.3</td><td>0.4</td><td>-</td><td>23.0</td><td>1.3</td></tr><tr><td>Qwen2.5-14B-Instruct</td><td>-</td><td>60.6</td><td>0.2</td><td>-</td><td>61.1</td><td>0.6</td><td>-</td><td>32.6</td><td>1.3</td></tr><tr><td>Qwen2.5-7B-Instruct</td><td>-</td><td>61.5</td><td>2.2</td><td>-</td><td>71.1</td><td>0.5</td><td>-</td><td>32.5</td><td>1.6</td></tr><tr><td rowspan="2">CoD (Chen et al., 2024)</td><td>DiagnosisGPT-34B</td><td>61.5</td><td>53.8</td><td>3.9</td><td>54.0</td><td>44.8</td><td>3.6</td><td>46.1</td><td>34.6</td><td>3.7</td></tr><tr><td>DiagnosisGPT-6B</td><td>61.5</td><td>36.9</td><td>5.1</td><td>56.1</td><td>37.2</td><td>3.9</td><td>46.6</td><td>28.5</td><td>3.4</td></tr><tr><td rowspan="2">DDO(Ours)</td><td>Qwen2.5-14B-Instruct</td><td>66.3</td><td>94.2</td><td>10.0</td><td>67.8</td><td>80.3</td><td>9.8</td><td>65.3</td><td>68.6</td><td>10.0</td></tr><tr><td>Qwen2.5-7B-Instruct</td><td>66.3</td><td>87.5</td><td>9.9</td><td>69.5</td><td>79.5</td><td>9.6</td><td>60.6</td><td>63.6</td><td>10.0</td></tr></table>
194
+
195
+ Table 2: Overall performance of DDO and baseline methods on three public medical consultation (MC) datasets. Accinit denotes diagnostic accuracy without any symptom inquiry. Bold numbers indicate the best performance, underlined numbers indicate the second-best. All diagnostic accuracy results are reported as percentages.
196
+
197
+ <table><tr><td rowspan="2">Method</td><td colspan="2">DXY</td><td colspan="2">GMD</td><td colspan="2">CMD</td></tr><tr><td>Acc</td><td>Avg.n</td><td>Acc</td><td>Avg.n</td><td>Acc</td><td>Avg.n</td></tr><tr><td>DDO(Ours)</td><td>87.5</td><td>9.9</td><td>79.5</td><td>9.6</td><td>63.6</td><td>10.0</td></tr><tr><td>w/o adapter</td><td>86.5</td><td>9.9</td><td>78.7</td><td>9.6</td><td>54.2</td><td>10.0</td></tr><tr><td>w/o policy</td><td>77.7</td><td>10.0</td><td>73.6</td><td>10.0</td><td>60.3</td><td>10.0</td></tr><tr><td>w/o masking</td><td>83.5</td><td>9.9</td><td>74.5</td><td>10.0</td><td>61.8</td><td>10.0</td></tr><tr><td>w/o retry</td><td>83.5</td><td>10.0</td><td>78.7</td><td>9.8</td><td>63.0</td><td>10.0</td></tr><tr><td>w/o decision</td><td>84.6</td><td>9.8</td><td>78.2</td><td>9.8</td><td>63.2</td><td>10.0</td></tr></table>
198
+
199
+
200
+
201
+ # 5.6 Ablation Study
202
+
203
+ As shown in Table 3, we perform ablation experiments on three MC datasets, using Qwen2.5-7B-Instruct as the LLM backbone.
204
+
205
+ Impact of Diagnostic Adapter. Removing the diagnostic adapter (w/o adapter)—using only the vanilla BTP method to estimate diagnostic confidence—results in a drop in diagnostic accuracy, with the largest decline observed on the CMD dataset. This demonstrates the effectiveness of the in-batch contrastive learning-based adapter in enhancing the disease discrimination ability of LLMs.
206
+
207
+ Table 3: Ablation results of DDO. $w/o$ adapter denotes removing the diagnostic adapter. $w/o$ policy and $w/o$ decision perform symptom inquiry with only the LLM and only the RL model, respectively. $w/o$ masking disables action space masking. $w/o$ retry omits the regeneration of candidate actions.
208
+
209
+ <table><tr><td rowspan="2">Method</td><td colspan="2">DXY</td><td colspan="2">GMD</td><td colspan="2">CMD</td></tr><tr><td>Accinit</td><td>Acc</td><td>Accinit</td><td>Acc</td><td>Accinit</td><td>Acc</td></tr><tr><td>Numerical</td><td>62.5</td><td>78.8</td><td>53.6</td><td>74.5</td><td>29.7</td><td>39.4</td></tr><tr><td>Numerical-SC</td><td>65.4</td><td>77.9</td><td>54.4</td><td>74.9</td><td>32.8</td><td>45.3</td></tr><tr><td>FirstLogit</td><td>59.6</td><td>70.2</td><td>60.3</td><td>74.1</td><td>40.5</td><td>43.7</td></tr><tr><td>AvgLogit</td><td>42.3</td><td>75.0</td><td>59.4</td><td>74.5</td><td>25.2</td><td>28.8</td></tr><tr><td>BTP</td><td>63.5</td><td>86.5</td><td>63.6</td><td>78.7</td><td>54.2</td><td>54.2</td></tr><tr><td>BTP-adapter(Ours)</td><td>66.3</td><td>87.5</td><td>69.5</td><td>79.5</td><td>60.6</td><td>63.6</td></tr></table>
210
+
211
+ Table 4: Comparison of diagnostic performance across different confidence estimation methods for LLMs.
212
+
213
+ Impact of RL-LLM Collaboration. To assess the effectiveness of the RL-LLM collaborative symptom inquiry process, we conduct ablation experiments in which symptom inquiry is performed solely by the RL policy (w/o decision) or solely by the LLM (w/o policy). Results show that both variants perform significantly worse, with the LLM-only variant exhibiting a greater performance drop. This highlights the advantage of multi-agent collaboration in DDO for conducting effective symptom inquiry. Moreover, removing action space masking (w/o masking) or disabling the regeneration mechanism for low-quality actions (w/o retry) also degrades performance, underscoring their role in ensuring reliable inquiry decisions.
214
+
215
+ # 5.7 Evaluation of Diagnostic Performance in LLM Confidence Estimation
216
+
217
+ To further assess the diagnostic effectiveness of our BTP-adapter, we compare it with several alternative confidence estimation methods for LLMs using three MC datasets. All methods are evaluated with the same initial and final symptom sequences, and the results are presented in Table 4.
218
+
219
+ Decoding-based Methods. Numerical and Numerical-SC prompt the LLM to directly generate a confidence score between 0 and 1 (Li et al., 2024b), with SC indicating the use of Self-Consistency (Wang et al., 2022). These approaches show significantly lower diagnostic accuracy than the BTP-adapter, highlighting the limitations of decoding-based confidence estimation and the advantages of our logits-based strategy.
220
+
221
+ ![](images/725baa1d9c9d4ef09bae3768c0c1633feb6dd17484f301077e3b6797543524af.jpg)
222
Figure 3: Effect of max turns $L$.
223
+
224
+
225
+
226
+ Logits-based Methods. Similar to our approach, FirstLogit and AvgLogit (Ma et al., 2025) estimate confidence from the logits of generated tokens. However, their diagnostic performance is notably inferior to that of the BTP-adapter. This may be due to their practice of computing confidence scores for all candidate diseases in a single generation pass, which can lead to context interference. In contrast, the BTP-adapter evaluates each candidate diagnosis independently, effectively mitigating such interference. We also compare against the original BTP method without the diagnostic adapter. Incorporating the adapter consistently enhances diagnostic accuracy, particularly for the initial diagnosis, which is critical for guiding effective symptom inquiry in the early stages of MC.
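+
+ To make the logits-based strategy concrete, the sketch below shows how a True/False diagnostic confidence can be read off the logits of the first generated token, in the spirit of BTP; the function name and token ids are illustrative, not the paper's exact implementation.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def true_false_confidence(first_token_logits: torch.Tensor,
+                           true_id: int, false_id: int) -> float:
+     """Confidence that the model answers 'True', read directly from the
+     logits of the first generated token (no decoding required)."""
+     pair = first_token_logits[[true_id, false_id]]
+     return F.softmax(pair, dim=-1)[0].item()
+
+ # Toy example with a dummy vocabulary of size 10:
+ # token id 3 stands for "True", token id 7 for "False".
+ logits = torch.tensor([0.1, 0.2, 0.0, 2.5, 0.3, 0.1, 0.0, 1.0, 0.2, 0.1])
+ print(true_false_confidence(logits, true_id=3, false_id=7))  # ~0.82
+ ```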
227
+
228
+ # 5.8 Effect of Max Turns $L$
229
+
230
+ As shown in Figure 3, we evaluate the diagnostic performance of the DDO framework on the DXY and GMD datasets under different maximum interaction turns $(L = 0/5/10/15)$. The line charts show an upward trend, indicating that increasing the maximum number of turns $L$ generally improves diagnostic accuracy. This suggests that the symptom inquiry process in DDO effectively collects critical diagnostic evidence. Notably, the most significant improvements occur in the early stages, with the gains diminishing as more turns are added—especially evident on the GMD dataset. One possible explanation is that DDO prioritizes inquiries for diseases with high initial diagnostic confidence. When the ground-truth disease $d_{l}$ initially ranks lower, additional inquiries may offer diminishing returns in diagnostic accuracy.
231
+
232
+ ![](images/ad3b59a6b547ff55e5b55f3586cc71c239a3598663c2c50e96fa9c52cc35fff8.jpg)
233
+ Figure 4: Diagnosis performance at the disease level on the GMD dataset.
234
+
235
+
236
+
237
+ # 5.9 Diagnostic Effectiveness Across Different Diseases
238
+
239
+ To assess the diagnostic performance of DDO at a fine-grained level, we visualize the diagnostic accuracy before and after symptom inquiry across the 12 candidate diseases in the GMD dataset, as shown in Figure 4. The results indicate that multi-turn symptom inquiry substantially enhances diagnostic accuracy for most diseases. Notably, the final accuracy for Coronary Artery Disease and Rhinitis reaches $100\%$, highlighting the effectiveness of the collaborative symptom collection between the Policy Agent and Inquiry Agent in DDO. However, for certain diseases such as Conjunctivitis and Thyroiditis, the inquiry process yields no improvement in diagnostic accuracy, and even shows a slight decline for Thyroiditis. Further analysis reveals that some MCRs for these diseases include only one or two vague or non-specific self-reported symptoms, which causes these diseases to be ranked relatively low initially, making it difficult for the Agents to identify critical symptoms for accurate prediction.
240
+
241
+ In addition, Table 5 reports both macro-F1 and micro-F1 scores comparing DDO with the strongest baseline, HAIformer. Although DDO does not surpass this baseline on GMD and CMD, it achieves comparable performance with fewer training epochs. The micro-F1 results demonstrate that DDO preserves overall diagnostic accuracy, while the macro-F1 results highlight that DDO maintains relatively balanced diagnostic effectiveness across different diseases, avoiding bias toward high-frequency classes.
242
+
243
+ <table><tr><td rowspan="2">Method</td><td colspan="2">DXY</td><td colspan="2">GMD</td><td colspan="2">CMD</td></tr><tr><td>macro-F1</td><td>micro-F1</td><td>macro-F1</td><td>micro-F1</td><td>macro-F1</td><td>micro-F1</td></tr><tr><td>HAIformer</td><td>88.8</td><td>88.5</td><td>90.1</td><td>90.4</td><td>69.3</td><td>71.1</td></tr><tr><td>DDO(Ours)</td><td>93.6</td><td>93.3</td><td>81.6</td><td>80.3</td><td>65.8</td><td>67.4</td></tr></table>
244
+
245
+ Table 5: Macro-F1 and Micro-F1 scores of DDO and the SOTA baseline on three MC datasets.
246
+
247
+
248
+
249
+ # 6 Discussion
250
+
251
+ In general, the larger the action space of an RL policy model, the more challenging the decision-making process becomes, and the more difficult the optimization tends to be. For example, we observed that with a smaller action space size (42 for DXY), the RL policy was able to sample more informative symptoms, whereas with a larger action space size (359 for CMD), the policy was more prone to sampling irrelevant ones. This motivates our use of masked sampling and LLM-voting during inference: by pruning the action space and leveraging RL-LLM collaboration, we aim to reduce the decision burden placed on the RL policy.
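+
+ As an illustration of the masking step, the following is a minimal sketch of masked action sampling over a discrete symptom space, assuming a PyTorch categorical policy head; it is not the exact DDO implementation.
+
+ ```python
+ import torch
+ from torch.distributions import Categorical
+
+ def masked_sample(action_logits: torch.Tensor, valid: torch.Tensor) -> int:
+     """Sample an inquiry action with invalid entries masked out.
+
+     `valid` is a boolean tensor: False marks symptoms already asked
+     or pruned from the current action space."""
+     masked = action_logits.masked_fill(~valid, float("-inf"))
+     return Categorical(logits=masked).sample().item()
+
+ logits = torch.randn(359)               # e.g., CMD's 359-symptom action space
+ valid = torch.ones(359, dtype=torch.bool)
+ valid[[5, 42]] = False                  # symptoms already inquired about
+ action = masked_sample(logits, valid)
+ assert valid[action]                    # masked actions are never sampled
+ ```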
252
+
253
+ Improving Diagnostic Gains in Challenging Diseases. For certain diseases, diagnostic accuracy exhibits only modest gains—or even slight declines—after symptom inquiry (e.g., Thyroiditis in GMD). As discussed in Section 4, this likely stems from insufficient information in the initial symptom presentation of some MCR cases, which constrains the Agents' ability to identify and query informative symptoms. Possible remedies include: (1) enhancing the initial Top-K accuracy of the Diagnosis Agent for these challenging diseases, since a stronger initial ranking increases the chance of retrieving relevant symptoms and thereby improves inquiry effectiveness; and (2) refining the inquiry strategy by leveraging the LLM's reasoning capacity to filter out irrelevant diseases, thus avoiding excessive focus on unrelated conditions.
254
+
255
+ Joint Optimization of the Diagnosis Adapter and RL Policy. In this work, we assign different optimization objectives to the diagnosis module and the symptom inquiry module, and adopt a cascaded optimization scheme. This dual-decision optimization effectively enhances the overall performance of LLM-based medical consultation. Nevertheless, LLM-RL cascading training can be inherently unstable. We believe that further improvements could be achieved by jointly optimizing the diagnosis adapter and the RL policy model. Such joint training may not only enable the diagnosis adapter to acquire more robust multi-disease diagnostic capabilities better aligned with interactive doctor-patient scenarios, but also facilitate more stable reward learning for the RL policy.
256
+
257
+
258
+
259
+ Mitigating Inference Overhead from Multi-Agent Collaboration. Our DDO framework employs four collaborating agents to accomplish the MC task, substantially enhancing the diagnostic performance of LLM-based methods. However, this multi-agent design also introduces significant inference overhead—for instance, a single multi-turn consultation per MCR typically takes 3-5 minutes. To mitigate this cost, a promising direction inspired by (Chopra and Shah, 2025) is to store completed consultation trajectories in a memory tree. When processing a new MCR, the system can first retrieve similar cases from the tree; if a close match is identified, the consultation can follow the retrieved trajectory, thereby reducing the number of LLM inference calls.
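+
+ A minimal sketch of that retrieval idea, assuming consultation states can be embedded as vectors and using a flat cache with cosine similarity rather than a full memory tree:
+
+ ```python
+ import numpy as np
+
+ class TrajectoryCache:
+     """Flat stand-in for a consultation memory: stores (embedding,
+     trajectory) pairs and reuses the closest past trajectory when a
+     new case is similar enough."""
+
+     def __init__(self, threshold: float = 0.9):
+         self.embs, self.trajs, self.threshold = [], [], threshold
+
+     def add(self, emb: np.ndarray, trajectory: list) -> None:
+         self.embs.append(emb / np.linalg.norm(emb))
+         self.trajs.append(trajectory)
+
+     def lookup(self, emb: np.ndarray):
+         """Return a cached inquiry trajectory, or None on a cache miss."""
+         if not self.embs:
+             return None
+         q = emb / np.linalg.norm(emb)
+         sims = np.stack(self.embs) @ q          # cosine similarities
+         i = int(np.argmax(sims))
+         return self.trajs[i] if sims[i] >= self.threshold else None
+
+ cache = TrajectoryCache()
+ cache.add(np.array([1.0, 0.0]), ["check for fever", "check for cough"])
+ print(cache.lookup(np.array([0.9, 0.1])))       # close match: trajectory reused
+ ```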
260
+
261
+ # 7 Conclusion
262
+
263
+ In this paper, we propose DDO, a novel LLM-based multi-agent collaborative framework designed to address the mismatch between existing LLM-based methods and the dual-decision nature of medical consultation (MC), which involves both sequential symptom inquiry and diagnosis over a constrained set of candidate diseases. DDO decouples these two decision-making processes and optimizes them with distinct objectives: it improves disease discrimination through a plug-and-play diagnostic adapter, and enhances information gathering via the synergy of a reinforcement learning-based policy agent and an LLM-based inquiry agent. Experiments on three public MC datasets show that DDO consistently outperforms existing LLM-based baselines and achieves performance competitive with state-of-the-art generation-based methods, demonstrating its effectiveness in the MC task.
264
+
265
+ # Limitations
266
+
267
+ While we propose DDO to enhance the effectiveness of LLMs in the medical consultation task, several limitations remain: (1) Label granularity: During confidence calibration, DDO assigns a target confidence of 1 to the ground-truth disease and a small constant to all others. This hard-labeling scheme may hinder the model's ability to softly distinguish between clinically similar diseases. (2) Model deployment: Since DDO requires training a diagnostic adapter for the underlying LLM, it is currently incompatible with API-based LLMs and must be deployed with locally hosted models. (3) Inference efficiency: DDO involves multi-agent reasoning, which introduces inference latency compared to traditional deep learning methods.
268
+
269
+
270
+
271
+ # Ethical Consideration
272
+
273
+ Due to the hallucination problem inherent in large language models, they may generate content that is not factually accurate. Therefore, the DDO framework proposed in this paper is intended solely for academic research. In real-world scenarios, medical decisions should always be based on professional diagnoses made by qualified physicians.
274
+
275
+ # Acknowledgments
276
+
277
+ This work was supported in part by National Natural Science Foundation of China (No. 62576363), the Natural Science Foundation of Hunan Province (No. 2025JJ50342) and Changsha Municipal Natural Science Foundation Project (No. kq2502144). This work was carried out in part using computing resources at the High-Performance Computing Center of Central South University.
278
+
279
+ # References
280
+
281
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
282
+ David Bani-Harouni, Nassir Navab, and Matthias Keicher. 2024. Magda: Multi-agent guideline-driven diagnostic assistance. In International workshop on foundation models for general medical AI, pages 163-172. Springer.
283
+ Junying Chen, Chi Gui, Anningzhe Gao, Ke Ji, Xidong Wang, Xiang Wan, and Benyou Wang. 2024. Cod, towards an interpretable medical agent using chain of diagnosis. arXiv preprint arXiv:2407.13301.
284
+ Junying Chen, Dongfang Li, Qingcai Chen, Wenxiu Zhou, and Xin Liu. 2022. Diaformer: Automatic diagnosis via symptoms sequence generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 4432-4440.
285
+ Wei Chen, Cheng Zhong, Jiajie Peng, and Zhongyu Wei. 2023. Dxformer: a decoupled automatic diagnostic system based on decoder-encoder transformer with dense symptom representations. Bioinformatics, 39(1):btac744.
286
+
287
+
288
+ Harshita Chopra and Chirag Shah. 2025. Feedback-aware monte carlo tree search for efficient information seeking in goal-oriented conversations. arXiv preprint arXiv:2501.15056.
289
+ Gianluca Detommaso, Martin Bertran, Riccardo Fogliato, and Aaron Roth. 2024. Multicalibration for confidence scoring in llms. In Proceedings of the 41st International Conference on Machine Learning, pages 10624-10641.
290
+ Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948.
291
+ Zhenyu Hou, Yukuo Cen, Ziding Liu, Dongxue Wu, Baoyan Wang, Xuanhe Li, Lei Hong, and Jie Tang. 2023. Mtdiag: an effective multi-task framework for automatic diagnosis. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 14241-14248.
292
+ Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3.
293
+ Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei W Koh, and Bryan Hooi. 2024. Uncertainty of thoughts: Uncertainty-aware planning enhances information seeking in llms. Advances in Neural Information Processing Systems, 37:24181-24215.
294
+ Mingyi Jia, Junwen Duan, Yan Song, and Jianxin Wang. 2025. medikal: Integrating knowledge graphs as assistants of llms for enhanced clinical diagnosis on emrs. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9278-9298.
295
+ Shreya Johri, Jaehwan Jeong, Benjamin A Tran, Daniel I Schlessinger, Shannon Wongvibulsin, Zhuo Ran Cai, Roxana Daneshjou, and Pranav Rajpurkar. 2024. Craft-md: A conversational evaluation framework for comprehensive assessment of clinical llms. In AAAI 2024 Spring Symposium on Clinical Foundation Models.
296
+ Hao-Cheng Kao, Kai-Fu Tang, and Edward Chang. 2018. Context-aware symptom checking for disease diagnosis using hierarchical reinforcement learning. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
297
+ Yubin Kim, Chanwoo Park, Hyewon Jeong, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Hyeonhoon Lee, Marzyeh Ghassemi, Cynthia Breazeal, Hae Won Park, et al. 2024. Mdagents: An adaptive collaboration of llms for medical decision-making. Advances in Neural Information Processing Systems, 37:79410-79452.
298
+
299
+ Abhishek Kumar, Robert Morabito, Sanzhar Umbet, Jad Kabbara, and Ali Emami. 2024. Confidence under the hood: An investigation into the confidence-probability alignment in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 315-334.
300
+ Junkai Li, Yunghwei Lai, Weitao Li, Jingyi Ren, Meng Zhang, Xinhui Kang, Siyu Wang, Peng Li, Ya-Qin Zhang, Weizhi Ma, et al. 2024a. Agent hospital: A simulacrum of hospital with evolvable medical agents. arXiv preprint arXiv:2405.02957.
301
+ Shuyue Stella Li, Vidhisha Balachandran, Shangbin Feng, Jonathan S Ilgen, Emma Pierson, Pang Wei Koh, and Yulia Tsvetkov. 2024b. Mediq: Question-asking llms and a benchmark for reliable interactive clinical reasoning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
302
+ Jiaxiang Liu, Yuan Wang, Jiawei Du, Joey Zhou, and Zuozhu Liu. 2024. Medcot: Medical chain of thought via hierarchical expert. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17371-17389.
303
+ Wenge Liu, Yi Cheng, Hao Wang, Jianheng Tang, Yafei Liu, Ruihui Zhao, Wenjie Li, Yefeng Zheng, and Xiaodan Liang. 2022. "my nose is running." "are you also coughing?": Building a medical diagnosis agent with interpretable inquiry logics. In Proceedings of the 31st International Joint Conference on Artificial Intelligence, pages 4266-4272.
304
+ Mingyu Derek Ma, Yanna Ding, Zijie Huang, Jianxi Gao, Yizhou Sun, and Wei Wang. 2025. Inferring from logits: Exploring best practices for decoding-free generative candidate selection. arXiv preprint arXiv:2501.17338.
305
+ Mingyu Derek Ma, Xiaoxuan Wang, Yijia Xiao, Anthony Cuturrufo, Vijay S Nori, Eran Halperin, and Wei Wang. 2024. Memorize and rank: Elevating large language models for clinical diagnosis prediction. In GenAI for Health: Potential, Trust and Policy Compliance.
306
+ Jeremy Qin, Bang Liu, and Quoc Nguyen. 2024. Enhancing healthcare llm trust with atypical presentations recalibration. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2520-2537.
307
+ Shyam Sundhar Ramesh, Yifan Hu, Iason Chaimalas, Viraj Mehta, Pier Giuseppe Sessa, Haitham Bou Ammar, and Ilija Bogunovic. 2024. Group robust preference optimization in reward-free rlhf. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
308
+ Daniel Rose, Chia-Chien Hung, Marco Lepri, Israa Alqassem, Kiril Gashteovski, and Carolin Lawrence. 2025. Meddxagent: A unified modular agent framework for explainable automatic differential diagnosis. arXiv preprint arXiv:2502.19175.
309
+
310
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
311
+ Wenqi Shi, Ran Xu, Yuchen Zhuang, Yue Yu, Haotian Sun, Hang Wu, Carl Yang, and May Dongmei Wang. 2024. Medadapter: Efficient test-time adaptation of large language models towards medical reasoning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22294-22314.
312
+ Gopendra Singh, Sai Vemulapalli, Mauajama Firdaus, and Asif Ekbal. 2024. Deciphering cognitive distortions in patient-doctor mental health conversations: A multimodal lvm-based detection and reasoning framework. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22546-22570.
313
+ Donald E Stanley and Daniel G Campos. 2013. The logic of medical diagnosis. Perspectives in Biology and Medicine, 56(2):300-315.
314
+ Zhoujian Sun, Cheng Luo, and Zhengxing Huang. 2024. Conversational disease diagnosis via external planner-controlled large language models. arXiv preprint arXiv:2404.04292.
315
+ Kai-Fu Tang, Hao-Cheng Kao, Chun-Nan Chou, and Edward Y Chang. 2016. Inquire and diagnose: Neural symptom checking ensemble using deep reinforcement learning. In NIPS workshop on deep reinforcement learning.
316
+ Yuanhe Tian, Ruyi Gan, Yan Song, Jiaxing Zhang, and Yongdong Zhang. 2024. Chimed-gpt: A chinese medical large language model with full training regime and better alignment to human preferences. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7156-7173.
317
+ Tao Tu, Anil Palepu, Mike Schaekermann, Khaled Saab, Jan Freyberg, Ryutaro Tanno, Amy Wang, Brenna Li, Mohamed Amin, Nenad Tomasev, et al. 2024. Towards conversational diagnostic ai. arXiv preprint arXiv:2401.05654.
318
+ Mina Valizadeh and Natalie Parde. 2022. The ai doctor is in: A survey of task-oriented dialogue systems for healthcare applications. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6638-6660.
319
+ Guoxin Wang, Minyu Gao, Shuai Yang, Ya Zhang, Lizhi He, Liang Huang, Hanlin Xiao, Yexuan Zhang, Wanyue Li, Lu Chen, et al. 2025. Citrus: Leveraging expert cognitive pathways in a medical language model for advanced medical decision support. arXiv preprint arXiv:2502.18274.
320
+ Huimin Wang, Wai-Chung Kwan, Kam-Fai Wong, and Yefeng Zheng. 2023. Coad: Automatic diagnosis through symptom and disease collaborative generation. arXiv preprint arXiv:2307.08290.
321
+
322
+
323
+
324
+ Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
325
+
326
+ Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuan-Jing Huang, Kam-Fai Wong, and Xiang Dai. 2018. Task-oriented dialogue system for automatic diagnosis. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201-207.
327
+
328
+ Yuan Xia, Jingbo Zhou, Zhenhui Shi, Chao Lu, and Haifeng Huang. 2020. Generative adversarial regularized mutual information policy gradient framework for automatic diagnosis. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 1062-1069.
329
+
330
+ Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, and Liang Lin. 2019. End-to-end knowledge-routed relational dialogue system for automatic diagnosis. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 7346-7353.
331
+
332
+ Lian Yan, Yi Guan, Haotian Wang, Yi Lin, and Jingchi Jiang. 2023. Efficient evidence-based dialogue system for medical diagnosis. In 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 3406-3413. IEEE.
333
+
334
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
335
+
336
+ Xuehan Zhao, Jiaqi Liu, Yao Zhang, Zhiwen Yu, and Bin Guo. 2024. HAIformer: Human-AI collaboration framework for disease diagnosis via doctor-enhanced transformer. In ECAI 2024, pages 1495-1502. IOS Press.
337
+
338
+ Zi'ou Zheng, Christopher Malon, Martin Renqiang Min, and Xiaodan Zhu. 2024. Exploring the role of reasoning structures for constructing proofs in multi-step natural language reasoning with large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15299-15312.
339
+
340
+ Cheng Zhong, Kangenbei Liao, Wei Chen, Qianlong Liu, Baolin Peng, Xuanjing Huang, Jiajie Peng, and Zhongyu Wei. 2022. Hierarchical reinforcement learning for automatic disease diagnosis. Bioinformatics, 38(16):3995-4001.
341
+
342
+ Shuang Zhou, Zidu Xu, Mian Zhang, Chunpu Xu, Yawen Guo, Zaifu Zhan, Sirui Ding, Jiashuo Wang, Kaishuai Xu, Yi Fang, et al. 2024. Large language models for disease diagnosis: A scoping review. arXiv preprint arXiv:2409.00097.
343
+
344
+ # A Other Implementation Details
345
+
346
+ # A.1 Datasets Details
347
+
348
+ We use three public medical consultation (MC) datasets: $\mathrm{DXY}^{1}$ (Xu et al., 2019), which is the most widely used online dialogue-based MC dataset in prior work, and $\mathrm{GMD}^{2}$ (Liu et al., 2022) together with $\mathrm{CMD}^{3}$ (Yan et al., 2023), which are representative of electronic medical record-based MC datasets. These datasets were released for academic research and have been de-identified by their original authors. We reviewed the accompanying papers and code repositories for licensing information: GMD is explicitly released under an MIT license, while DXY and CMD, though lacking formal licenses, clearly state in their papers that the data is intended for public research use. In our study, we strictly follow these terms and use the datasets solely for research purposes. All three datasets are primarily in Chinese, with GMD additionally offering an English version.
349
+
350
+ For validation, we apply stratified sampling to the training sets of DXY and CMD to create development sets (GMD already includes a predefined split). The final train/dev/test splits for DXY, GMD, and CMD are 318/103/103, 1912/239/239, and 3379/671/1342, respectively.
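+
+ For illustration, such a stratified development split can be produced with scikit-learn as sketched below (dummy records and labels; the sizes follow the DXY split above):
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ records = [f"mcr_{i}" for i in range(421)]   # 318 train + 103 dev (DXY sizes)
+ labels = [i % 5 for i in range(421)]         # dummy disease labels
+ train_recs, dev_recs, train_y, dev_y = train_test_split(
+     records, labels, test_size=103, stratify=labels, random_state=0)
+ print(len(train_recs), len(dev_recs))        # 318 103
+ ```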
351
+
352
+ Since MCRs in DXY and GMD contain relatively few symptoms on average—only 4.74 and 5.55 per record, respectively—this sparsity may hinder the reliable calibration of diagnostic confidence in LLMs. Therefore, when constructing the confidence calibration dataset, we also provide an augmented version of the data to address this issue, enriching MCRs with additional symptoms to enhance the adapter's long-range diagnostic capability. Specifically, for each training MCR with a small number of symptoms, we sample additional implicit symptoms based on disease knowledge extracted from the training data. As a result, the calibration training dataset contains 2,185 instances for DXY (originally 553), 13,598 for GMD (originally 4,837), and 54,608 for CMD.
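+
+ The sketch below illustrates one plausible form of this augmentation, assuming per-disease symptom frequencies estimated from the training split; the function name and the `min_symptoms` threshold are illustrative, not the paper's values:
+
+ ```python
+ import random
+
+ def augment_mcr(mcr_symptoms: dict, disease_freq: dict,
+                 min_symptoms: int = 6, seed: int = 0) -> dict:
+     """Enrich a sparse MCR by sampling extra implicit symptoms in
+     proportion to their frequency under the ground-truth disease."""
+     rng = random.Random(seed)
+     out = dict(mcr_symptoms)
+     pool = [s for s in disease_freq if s not in out]
+     while len(out) < min_symptoms and pool:
+         weights = [disease_freq[s] for s in pool]
+         s = rng.choices(pool, weights=weights, k=1)[0]
+         out[s] = True
+         pool.remove(s)
+     return out
+
+ freq = {"cough": 0.88, "expectoration": 0.65, "fever": 0.53, "vomiting": 0.38}
+ print(augment_mcr({"runny nose": True, "cough": True}, freq))
+ ```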
353
+
354
+ To improve training efficiency, we do not include all diseases in each contrastive batch when constructing confidence calibration data. Instead, each ground-truth disease is paired with four clinically similar candidate diseases for comparison.
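+
+ A minimal sketch of how such a contrastive group could be assembled, using the target-confidence scheme described in the Limitations section (1 for the ground truth, a small constant for negatives); the helper and the 0.05 floor are illustrative:
+
+ ```python
+ def build_calibration_group(gt_disease: str, similar: dict,
+                             group_len: int = 5, eps: float = 0.05) -> list:
+     """One in-batch contrastive group: the ground-truth disease plus
+     (group_len - 1) clinically similar distractors. Target confidence
+     is 1 for the ground truth and a small constant for each negative."""
+     negatives = similar[gt_disease][: group_len - 1]
+     return [(gt_disease, 1.0)] + [(d, eps) for d in negatives]
+
+ similar = {"pneumonia": ["URTI", "bronchitis", "AR", "HFMD"]}
+ print(build_calibration_group("pneumonia", similar))
+ ```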
355
+
356
+ # A.2 Training Details
357
+
358
+ Hyperparameters and model checkpoints were selected based on the validation dataset, considering both Top-K accuracy and diagnostic effectiveness across diseases. The key configurations are summarized in Table 6, Table 7 and Table 8.
359
+
360
+ During confidence calibration, for Qwen2.5-7B-Instruct, several promising checkpoints were retained as candidate adapters for the subsequent RL training, while for other LLMs only the best-performing checkpoint was used for inference.
361
+
362
+ LLM-RL cascading training can be inherently unstable. To mitigate this, each candidate adapter was employed to train a policy model. The resulting models were then evaluated, and the one achieving the highest Top-K accuracy was selected as the final Policy Agent. This agent was subsequently deployed across different LLM backbones, demonstrating both robustness and transferability.
363
+
364
+ <table><tr><td>Hyperparameters</td><td>DXY</td><td>GMD</td><td>CMD</td></tr><tr><td>max training epochs</td><td>5</td><td>1</td><td>1</td></tr><tr><td>global batch size</td><td>8</td><td>8</td><td>8</td></tr><tr><td>learning rate</td><td>5e-5</td><td>5e-5</td><td>5e-5</td></tr><tr><td>lora rank</td><td>16</td><td>16</td><td>16</td></tr><tr><td>in-batch group length</td><td>5</td><td>5</td><td>5</td></tr></table>
365
+
366
+ Table 6: Hyperparameters for confidence calibration.
367
+
368
+ <table><tr><td>Hyperparameters</td><td>DXY</td><td>GMD</td><td>CMD</td></tr><tr><td>actor hidden layers</td><td>[256,128,128]</td><td>[256,128,128]</td><td>[512,256,256]</td></tr><tr><td>critic hidden layers</td><td>[64]</td><td>[64]</td><td>[128]</td></tr><tr><td>learning rate</td><td>5e-5</td><td>5e-5</td><td>5e-5</td></tr><tr><td>batch size</td><td>64</td><td>128</td><td>128</td></tr><tr><td>steps per update</td><td>1024</td><td>2048</td><td>2048</td></tr><tr><td>epochs</td><td>5</td><td>5</td><td>5</td></tr><tr><td>total steps</td><td>51200</td><td>102400</td><td>102400</td></tr><tr><td>hitting reward</td><td>0.5</td><td>0.5</td><td>0.5</td></tr><tr><td>ranking reward</td><td>0.5</td><td>0.5</td><td>0.5</td></tr><tr><td>diagnosis reward</td><td>1.0</td><td>1.0</td><td>1.0</td></tr><tr><td>frequency penalty</td><td>0.2</td><td>0.2</td><td>0.2</td></tr><tr><td>window size</td><td>3</td><td>5</td><td>5</td></tr><tr><td>floor turns</td><td>3</td><td>6</td><td>5</td></tr></table>
369
+
370
+ Table 7: Hyperparameters for RL policy model's training and inference.
371
+
372
+ <table><tr><td>Hyperparameters</td><td>DXY</td><td>GMD</td><td>CMD</td></tr><tr><td>sampling times</td><td>6</td><td>6</td><td>7</td></tr><tr><td>window size</td><td>3</td><td>4</td><td>5</td></tr><tr><td>floor turns</td><td>3</td><td>5</td><td>5</td></tr><tr><td>retry times</td><td>1</td><td>2</td><td>2</td></tr></table>
373
+
374
+ Table 8: Hyperparameters for LLM-RL collaboration in symptom inquiry.
375
+
376
+ # A.3 Model Deployment
377
+
378
+ We locally deployed the LLMs used in our DDO framework on GPU devices. Specifically, Qwen2.5-7B-Instruct was run on an NVIDIA RTX 3090 GPU, while Qwen2.5-14B-Instruct was run on an NVIDIA Tesla V100 GPU. For reproducing the Chain-of-Diagnosis baseline (Chen et al., 2024), we deployed DiagnosisGPT-6B and DiagnosisGPT-34B using 1 and 3 NVIDIA Tesla V100 GPUs, respectively. For reproducing EBAD (Yan et al., 2023) and HAIformer (Zhao et al., 2024), we used a single NVIDIA RTX 3090 GPU. For implementing the Direct Prompting baseline and reproducing the Uncertainty of Thoughts baseline (Hu et al., 2024), we utilized the Qwen2.5-Instruct API provided by the Siliconflow platform<sup>4</sup> and the ChatGPT API provided by the ChatAnywhere platform<sup>5</sup>.
379
+
380
+ Based on Stable-Baselines3<sup>6</sup>, the reinforcement learning policy model in DDO is trained and deployed on a single NVIDIA RTX 3090 GPU.
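+
+ For reference, the Table 7 DXY configuration maps onto Stable-Baselines3's PPO roughly as sketched below; the consultation environment itself is the paper's own, so a standard Gymnasium environment stands in for it here:
+
+ ```python
+ import gymnasium as gym
+ from stable_baselines3 import PPO
+
+ # Placeholder env: the real environment is DDO's consultation MDP, whose
+ # observation encodes symptom status and whose discrete actions are the
+ # candidate symptoms to inquire about.
+ env = gym.make("CartPole-v1")
+
+ model = PPO(
+     "MlpPolicy",
+     env,
+     learning_rate=5e-5,
+     n_steps=1024,              # "steps per update" for DXY
+     batch_size=64,
+     n_epochs=5,
+     policy_kwargs=dict(net_arch=dict(pi=[256, 128, 128], vf=[64])),
+ )
+ model.learn(total_timesteps=51_200)   # "total steps" for DXY
+ ```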
381
+
382
+ # B Statistical Results
383
+
384
+ # B.1 Standard Errors and Confidence Intervals
385
+
386
+ We report the standard errors and confidence intervals of the results for the proposed DDO framework on the three MC datasets in Table 9, based on experiments conducted with different random seeds. For efficiency considerations, we use Qwen2.5-7B-Instruct as the LLM backbone. The comparison between our proposed DDO framework and the baseline methods in the main experiments was performed using the random seed that achieved the best overall performance.
387
+
388
+ The statistical results in Table 9 indicate that the diagnostic performance of DDO is relatively stable across different random seeds. The standard errors are very small (below 1.0 on all datasets), and the $95\%$ confidence intervals are also narrow (e.g., only 0.6 on GMD), suggesting that repeated runs of DDO yield relatively consistent results.
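+
+ For clarity, the Table 9 statistics can be computed from per-seed accuracies as sketched below (the accuracy values shown are placeholders, not the actual seed results):
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ acc = np.array([83.7, 84.9, 84.6])            # placeholder per-seed accuracies
+ mean = acc.mean()
+ se = acc.std(ddof=1) / np.sqrt(len(acc))      # standard error of the mean
+ half = stats.t.ppf(0.975, df=len(acc) - 1) * se   # 95% CI half-width
+ print(f"mean={mean:.1f}, SE={se:.1f}, 95% CI=+/-{half:.1f}")
+ ```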
389
+
390
+ # B.2 Significance Testing
391
+
392
+ We conducted paired significance tests between the diagnostic performance of our proposed DDO framework and the state-of-the-art baseline HAIformer on the same set of random seeds. On the DXY dataset, DDO demonstrates a statistically significant improvement over HAIformer $(p < 0.05)$. However, on the GMD and CMD datasets, DDO achieves lower diagnostic accuracy than HAIformer, and the differences are statistically significant $(p < 0.01)$. These results highlight potential challenges in scaling LLM-based medical consultation methods to datasets with more diverse disease categories.
393
+
394
+ <table><tr><td>Dataset</td><td>Mean Accuracy</td><td>Standard Error</td><td>95% CI (%)</td></tr><tr><td>DXY</td><td>84.4</td><td>0.8</td><td>2.3</td></tr><tr><td>GMD</td><td>79.6</td><td>0.1</td><td>0.6</td></tr><tr><td>CMD</td><td>63.1</td><td>0.4</td><td>1.6</td></tr></table>
395
+
396
+ Table 9: Mean accuracy, standard error (SE), and $95\%$ confidence interval (CI) of DDO for final diagnostic accuracy across three MC datasets.
397
+
398
+
399
+
400
+ <table><tr><td>Metric</td><td>DXY</td><td>GMD</td><td>CMD</td></tr><tr><td>Paired t-test p-value</td><td>0.0355</td><td>0.0002</td><td>0.0046</td></tr></table>
401
+
402
+ Table 10: Statistical significance of DDO compared to HAIformer across three MC datasets.
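+
+ The paired test above can be reproduced as sketched below, assuming per-seed accuracy lists for both systems (the values shown are placeholders):
+
+ ```python
+ from scipy import stats
+
+ ddo = [84.4, 85.1, 83.8]         # placeholder per-seed accuracies (DDO)
+ hai = [88.3, 88.6, 88.7]         # placeholder per-seed accuracies (HAIformer)
+ t, p = stats.ttest_rel(ddo, hai)
+ print(f"t={t:.2f}, p={p:.4f}")   # p < 0.05 indicates a significant difference
+ ```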
403
+
404
+ # C Case Study
405
+
406
+ Table 11 shows a medical consultation case. The patient initially self-reported symptoms including runny nose, and DDO used this information to conduct multiple rounds of symptom inquiry to gather more evidence. In each round, DDO first provides a set of candidate inquiry actions via the Policy Agent. The Inquiry Agent then selects the most appropriate action based on reasoning. The Patient Agent responds with the presence or absence of symptoms based on the Medical Consultation Record (MCR). For symptoms not recorded in the MCR, the Patient Agent infers their likely presence or absence based on the clinical presentation of the disease. The Diagnosis Agent evaluates the diagnostic confidence for each candidate disease. In this case, after reaching the maximum number of interaction rounds, the disease with the highest diagnostic confidence—pneumonia—was identified as the patient's final diagnosis, which aligns with the ground-truth diagnosis in the MCR.
407
+
408
+ # D Prompt Templates
409
+
410
+ Tables 12-14 show the prompts used in our DDO framework. Tables 15-18 show the prompts used in the DP baseline.
411
+
412
+ Self Report (Explicit Symptoms): The patient has symptoms of cough, restlessness, and runny nose.
413
+
414
+ Explicit Symptoms & Implicit Symptoms: {"cough": True, "restlessness": True, "runny nose": True, "anorexia": True, "short of breath": True}
415
+
416
+ Disease Label: pneumonia
417
+
418
+ Candidate Diseases: ["hand-foot-and-mouth disease (HFMD)", "upper respiratory tract infection (URTI)", "pneumonia", "allergic rhinitis (AR)", "diarrhea"]
419
+
420
+ Diagnostic Confidence of Top Diseases: {"URTI": 0.68, "pneumonia": 0.16, "AR": 0.16}
421
+
422
+ # Turn 1
423
+
424
+ Candidate Actions: ["check for fever", "check for vomiting", "check for short of breath", "check for pharyngeal discomfort"]
425
+
426
+ Selected Symptom for Inquiry: fever
427
+
428
+ Selection Reasoning: First, we examine the current diagnostic confidence scores: upper respiratory infection (0.68), pneumonia (0.16), and allergic rhinitis (0.16). Since upper respiratory infection has a significantly higher confidence, we prioritize symptoms associated with it. Looking at symptom frequencies for this disease, fever (0.507) is the most common, followed by vomiting (0.233), throat discomfort (0.164), and short of breath (0.123). Based on Strategy 1, we should ask the patient about fever next, as it is the most indicative symptom.
429
+
430
+ Symptom Status: True
431
+
432
+ Response Reasoning: To assess whether the patient is likely to have the symptom "fever," we examine its frequency relative to other listed symptoms. A higher frequency—i.e., a higher rank in the sorted list—indicates the symptom is more typical and thus more likely to be present in the patient. According to the information, "fever" has a frequency of 0.533. When we sort all symptoms by frequency in descending order, "fever" ranks 3rd, following "cough" (0.883) and "sputum production" (0.65). This places "fever" in the high-frequency range. Therefore, based on this analysis, we determine the likelihood of the patient having the symptom "fever" to be high, and the final judgment is True.
433
+
434
+ Diagnostic Confidence of Top Diseases: {"URTI": 0.45, "pneumonia": 0.45, "HFMD": 0.11}
435
+
436
+ # Turn 2
437
+
438
+ Candidate Actions: ["check for herpes", "check for sneezing", "check for pharyngeal discomfort", "check for short of breath", "check for vomiting"]
439
+
440
+ Selected Symptom for Inquiry: short of breath
441
+
442
+ Selection Reasoning: The current diagnostic confidence scores are: upper respiratory infection (0.45), pneumonia (0.45), and hand, foot, and mouth disease (0.11). Since no single disease stands out, we can't apply Strategy 1. So we use Strategy 2, which selects symptoms related to the patient's current presentation—cough, irritability, runny nose, and fever. Among the candidate symptoms (herpes, sneezing, throat discomfort, short of breath, vomiting), short of breath is the most relevant, as it is strongly associated with pneumonia. Therefore, we choose short of breath as the next symptom to inquire about.
443
+
444
+ Symptom Status: True
445
+
446
+ Response Reasoning: Short of breath is recorded in MCR, which indicates that the patient has this symptom.
447
+
448
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.68, "URTI": 0.16, "HFMD": 0.16}
449
+
450
+ # Turn 3
451
+
452
+ Candidate Actions: ["check for pulmonary moist rales", "check for bubble-like spitting", "check for sneeze", "check for vomiting"]
453
+
454
+ Selected Symptom for Inquiry: pulmonary moist rales
455
+
456
+ Selection Reasoning: The current diagnostic confidence scores are: pneumonia (0.68), upper respiratory infection (0.16), and hand, foot, and mouth disease (0.16). Since pneumonia stands out with the highest confidence, we prioritize its typical symptoms. Among the candidate symptoms—pulmonary moist rales (0.167), bubble blowing (0.167), sneezing (0.117), and vomiting (0.383)—both pulmonary moist rales and bubble blowing are more specific to pneumonia, despite vomiting having a higher frequency. Following Strategy 1, we choose pulmonary moist rales as the next symptom to ask the patient.
457
+
458
+ Symptom Status: False
459
+
460
+ Response Reasoning: To assess whether the patient is likely to have the symptom pulmonary moist rales, we first examine its frequency in pneumonia cases and compare it with other symptoms. The frequency of pulmonary moist rales is 0.167. When we sort all listed symptoms by their frequency in pneumonia, pulmonary moist rales ranks relatively low—below common symptoms like cough (0.883), sputum production (0.65), and fever (0.533), and only slightly higher than symptoms like fatigue, loose stools, and sweating. This suggests that pulmonary moist rales is not among the most typical symptoms of pneumonia. Therefore, we conclude that the patient is less likely to have this symptom, and the judgment is False.
461
+
462
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.68, "URTI": 0.16, "diarrhea": 0.16}
463
+
464
+ # Turn 4
465
+
466
+ Candidate Actions: ["check for anorexia", "check for vomiting", "check for lethargy"]
467
+
468
+ Selected Symptom for Inquiry: vomiting
469
+
470
+ Selection Reasoning: The current diagnostic confidence scores are: pneumonia (0.68), upper respiratory infection (0.16), and pediatric diarrhea (0.16). Since pneumonia has the highest confidence, we focus on symptoms related to it. Among the candidate symptoms, vomiting has the highest frequency in pneumonia cases at 0.383, compared to anorexia (0.15) and fatigue (0.083). Based on this, we choose vomiting as the next symptom to ask the patient about.
471
+
472
+ Symptom Status: True
473
+
474
+ Response Reasoning: To determine whether the patient is likely to have the symptom "vomiting," we examine its frequency among pneumonia patients, which is 0.383. Although this is lower than symptoms like cough (0.883), sputum production (0.65), fever (0.533), runny nose (0.467), and short of breath (0.433), it is higher than others such as nasal obstruction (0.267) and irritability (0.233). This indicates that vomiting is relatively common in pneumonia cases. While it may not be the most typical symptom, its frequency suggests it has moderate representativeness. Therefore, we conclude that the likelihood of the patient having this symptom is relatively high, and the judgment result is True.
475
+
476
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.68, "URTI": 0.16, "HFMD": 0.16}
477
+
478
+ # Turn 5
479
+
480
+ Candidate Actions: ["check for rash", "check for expectoration", "check for anorexia", "check for herpes", "check for sneeze"]
481
+
482
+ Selected Symptom for Inquiry: expectoration
483
+
484
+ Selection Reasoning: Pneumonia currently has the highest diagnostic confidence (0.68), much higher than upper respiratory infection and hand-foot-mouth disease (both 0.16). So, we focus on symptoms typical of pneumonia. Among the candidate symptoms, only expectoration (coughing up phlegm) has a high occurrence in pneumonia $(65\%)$, while others like rash, anorexia, sneezing, and herpes are either rare or not typical. Therefore, the most appropriate next symptom to ask about is expectoration.
485
+
486
+ Symptom Status: True
487
+
488
+ Response Reasoning: To determine whether the patient is likely to have the symptom expectoration, we refer to the symptom frequency data for pneumonia. The frequency of expectoration is 0.65, which is relatively high—only lower than cough (0.883) and fever (0.533). This indicates that expectoration is a common and typical symptom of pneumonia. Therefore, it's reasonable to conclude that the patient is likely to have this symptom. Final judgment: True.
489
+
490
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.68, "URTI": 0.16, "diarrhea": 0.16}
491
+
492
+ # Turn 6
493
+
494
+ Candidate Actions: ["check for lethargy", "check for sneeze", "check for nasal obstruction"]
495
+
496
+ Selected Symptom for Inquiry: nasal obstruction
497
+
498
+ Selection Reasoning: First, we look at the current diagnostic confidence scores: pneumonia (0.68), upper respiratory infection (0.16), and pediatric diarrhea (0.16). Since pneumonia has the highest confidence, we prioritize its related symptoms. Among the candidate symptoms, nasal obstruction (0.267) is more common in pneumonia than listlessness (0.083) and sneezing (0.033). According to strategy 1, we should select the most typical symptom, so we choose "nasal obstruction" as the next symptom to ask the patient about. Final answer: choose nasal obstruction as the next symptom to inquire.
499
+
500
+ Symptom Status: False
501
+
502
+ Response Reasoning: To determine whether the patient is likely to have the symptom "nasal obstruction," we examine its frequency in pneumonia cases. The data shows a frequency of 0.267, which ranks 7th among all listed symptoms: lower than common ones like cough (0.883), sputum (0.65), and fever (0.533). Since "nasal obstruction" is not among the most typical symptoms of pneumonia, it is less likely the patient has it. Final judgment: False.
503
+
504
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.66, "URTI": 0.18, "diarrhea": 0.16}
505
+
506
+ # Turn 7
507
+
508
+ Candidate Actions: ["check for anorexia", "check for lethargy", "check for sneeze"]
509
+
510
+ Selected Symptom for Inquiry: anorexia
511
+
512
+ Selection Reasoning: The current diagnostic confidence for diseases is: Pneumonia: 0.66, Upper Respiratory Tract Infection: 0.18, Pediatric Diarrhea: 0.16. Since pneumonia has the highest confidence, we will focus on pneumonia-related symptoms. Among the candidate symptoms, anorexia (0.15), lethargy (0.083), and sneezing (0.117) are considered, with anorexia having the highest frequency. Based on Strategy 1, we will select anorexia as the next symptom to ask the patient.
513
+
514
+ Symptom Status: True
515
+
516
+ Response Reasoning: Anorexia is recorded in MCR, which indicates that the patient has this symptom.
517
+
518
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.45, "URTI": 0.45, "diarrhea": 0.11}
519
+
520
+ # Turn 8
521
+
522
+ Candidate Actions: ["check for sneeze", "check for pharyngeal discomfort", "check for lethargy"]
523
+
524
+ Selected Symptom for Inquiry: sneeze
525
+
526
+ Selection Reasoning: The diagnostic confidence for pneumonia and upper respiratory tract infection is the same (0.45), while pediatric diarrhea is much lower (0.11), so we can't apply Strategy 1. Instead, we use Strategy 2 and focus on symptoms related to the patient's condition. The patient's symptoms include cough, irritability, runny nose, fever, difficulty breathing, vomiting, phlegm, and anorexia. Looking at the frequency of candidate symptoms: Sneezing is more common in upper respiratory tract infection (0.233), while throat discomfort and lethargy appear less frequently across all diseases. Since sneezing is more related to upper respiratory tract infection and the patient's symptoms are mostly respiratory, sneezing helps distinguish between pneumonia and upper respiratory tract infection. Therefore, we should ask about sneezing next.
527
+
528
+ Symptom Status: False
529
+
530
+ Response Reasoning: To determine if the patient is likely to have the symptom 'sneezing', we first look at its frequency in pneumonia and compare it to other symptoms. The frequency of sneezing in pneumonia is 0.117. When we rank all listed symptoms by frequency, sneezing is 13th with a relatively low frequency. The symptoms with higher frequencies include cough (0.883), phlegm (0.65), fever (0.533), and others. Given that sneezing ranks low in the list, it is not considered a typical symptom of pneumonia. Therefore, the conclusion is that the patient is unlikely to have the symptom 'sneezing'.
531
+
532
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.66, "URTI": 0.18, "diarrhea": 0.16}
533
+
534
+ # Turn 9
535
+
536
+ Candidate Actions: ["check for pharyngeal discomfort", "check for lethargy"]
537
+
538
+ Selected Symptom for Inquiry: lethargy
539
+
540
+ Selection Reasoning: Given the diagnostic confidence for pneumonia (0.66), upper respiratory tract infection (0.18), and pediatric diarrhea (0.16), we prioritize symptoms related to pneumonia since its confidence is much higher. The candidate symptoms for pneumonia are lethargy (0.083) and throat discomfort (0.05). Although both have low frequencies, lethargy is more typical of pneumonia, so we choose it as the next symptom to ask the patient about.
541
+
542
+ Symptom Status: False
543
+
544
+ Response Reasoning: Lethargy is not recorded in MCR. This symptom is rarely present in pneumonia, so it is unlikely that the patient has this symptom.
545
+
546
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.68, "URTI": 0.16, "HFMD": 0.16}
547
+
548
+ # Turn 10
549
+
550
+ Candidate Actions: ["check for bubble-like spitting", "check for pulmonary shadow", "check for rash", "check for pharyngeal discomfort"]
551
+
552
+ Selected Symptom for Inquiry: bubble-like spitting
553
+
554
+ Selection Reasoning: Given the diagnostic confidence for pneumonia (0.68), upper respiratory tract infection (0.16), and hand-foot-and-mouth disease (0.16), we prioritize pneumonia-related symptoms since its confidence is significantly higher. The candidate symptoms for pneumonia are bubble-like spitting (0.167), lung shadow (0.167), throat discomfort (0.15), and rash (0.033). Both "bubble-like spitting" and "lung shadow" have the highest occurrence rates, but considering that bubble-like spitting may directly reflect lung dysfunction, it is likely more valuable. Therefore, we should choose "bubble-like spitting" as the next symptom to ask the patient about to further confirm pneumonia.
555
+
556
+ Symptom Status: False
557
+
558
+ Response Reasoning: To determine if the patient is likely to have the symptom "bubble-like spitting," we first check its frequency in pneumonia symptoms. The frequency is 0.167. When we rank all symptoms by frequency, "bubble-like spitting" ranks 9th. The most common symptoms of pneumonia include cough, sputum, and fever, while "bubble-like spitting" is relatively uncommon. Therefore, it's not a typical pneumonia symptom. Based on this, we conclude that the patient is unlikely to have the symptom "bubble-like spitting." The judgment is False.
559
+
560
+ Diagnostic Confidence of Top Diseases: {"pneumonia": 0.68, "URTI": 0.16, "diarrhea": 0.16}
561
+
562
+ Table 11: A medical consultation case by DDO. The symptom statuses True and False represent presence and absence, respectively. "Top Diseases" refers to the diseases with the highest diagnostic confidence, ranked in the top-$w$ positions, with their diagnostic confidence values processed using Softmax.
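+
+ The softmax step mentioned in the caption, shown concretely (raw scores chosen for illustration; they recover the 0.68/0.16/0.16 pattern seen repeatedly in the table):
+
+ ```python
+ import numpy as np
+
+ def softmax(x: np.ndarray) -> np.ndarray:
+     e = np.exp(x - np.max(x))
+     return e / e.sum()
+
+ raw = {"pneumonia": 1.9, "URTI": 0.45, "HFMD": 0.45}   # illustrative raw scores
+ probs = softmax(np.array(list(raw.values())))
+ print(dict(zip(raw, probs.round(2))))  # {'pneumonia': 0.68, 'URTI': 0.16, 'HFMD': 0.16}
+ ```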
563
+
564
+ You are an experienced medical expert. The following information is provided to you:
565
+
566
+ Patient's Symptom Presentation
567
+
568
+ Present symptoms: {positive_symptoms}
569
+
570
+ Absent symptoms: {negative_symptoms}
571
+
572
+ Symptom Knowledge for {candidate_disease}
573
+
574
+ Symptom occurrence frequencies based on existing diagnosed cases of {candidate_disease}: {empirical_knowledge}
575
+
576
+ Based on the patient's symptom presentation and the symptom knowledge of {candidate_disease}, determine whether this disease can be diagnosed. The presence of relevant symptoms increases diagnostic confidence, while the absence of relevant symptoms decreases it. The influence of a symptom's presence or absence on diagnostic confidence increases with its typicality for the disease. There are two possible outputs: True or False. Output True if you believe the disease can be diagnosed; output False if it cannot.
577
+
578
+ Please output only the judgment result, without any additional content.
579
+
580
+ Table 12: The prompt used by BTP to estimate diagnostic confidence in our DDO framework.
581
+
582
+ You are an experienced medical expert. Your task is to help select the next symptom to inquire about from a given set of candidate symptoms, in order to further collect the patient's symptom information. You are provided with the following information:
583
+
584
+ Current known patient symptom status (already inquired symptoms):
585
+
586
+ Present symptoms: {positive_symptoms}
587
+
588
+ Absent symptoms: {negative_symptoms}
589
+
590
+ Disease diagnostic confidence (confidence values range from 0 to 1; the higher the value, the more likely the disease):
591
+
592
+ Clinical presentation knowledge of diseases (symptom occurrence frequency based on historical case statistics): {top_diseases_empirical_knowledge}
593
+
594
+ Candidate symptoms: {candidate_symptoms}
595
+
596
+ Based on the above information, choose one suitable symptom from the candidate symptoms to ask next. The symptom selection strategies are as follows:
597
+
598
+ Strategy 1 (preferred): If the top-ranked disease has significantly higher diagnostic confidence than the others, choose a symptom from the candidate list that is relatively typical for the top-ranked disease, to help confirm its likelihood.
599
+
600
+ Strategy 2: If no candidate symptom fits Strategy 1, choose a symptom that is relatively related to the patient's current symptom presentation.
601
+
602
+ Output format:
603
+
604
+ If there exists a suitable symptom 'xx' in the candidate symptoms, output: Select 'xx' as the next symptom to inquire about.
605
+
606
+ If no suitable symptom exists in the candidate symptoms, output: New candidate symptoms are needed.
607
+
608
+ Please think step by step.
609
+
610
+ Table 13: The prompt used to select an inquiry action in our DDO framework.
611
+
612
+ You are a patient simulator. The disease that the simulated patient truly has is {disease_label}.
613
+
614
+ The symptom knowledge of disease {disease_label} is as follows:
615
+
616
+ Based on statistics from previously diagnosed cases of {disease_label}, the symptom occurrence frequencies are: {empirical knowledge}
617
+
618
+ You need to determine whether the patient is likely to have the symptom {inquired_symptom} based on the symptom knowledge of the disease. The judgment should be either True or False:
619
+
620
+ True indicates that the patient is likely to have the symptom.
621
+
622
+ False indicates that the patient is unlikely to have the symptom.
623
+
624
+ The criteria for judgment are as follows:
625
+
626
+ If the symptom {inquired_symptom} is relatively typical for the disease {disease_label} (i.e., it ranks high in the symptom occurrence frequency), then it is considered likely that the patient has the symptom (judgment should be True).
627
+
628
+ If the symptom is not typical for the disease, then the patient is considered unlikely to have it (judgment should be False).
629
+
630
+ Please think step by step and decide whether the patient is likely to have the symptom {inquired_symptom}.
631
+
632
+ Table 14: The prompt used for simulating the patient's response in our DDO framework.
633
+
634
+ You are an experienced medical expert conducting a consultation with a patient.
635
+
636
+ After several rounds of symptom inquiries, the patient has confirmed the following symptoms: {positive_symptoms}. The patient has denied the following symptoms: {negative_symptoms}. The diseases you suspect are: {candidate_diseases}.
637
+
638
+ You need to decide whether to continue asking about symptoms to gather diagnostic evidence or to provide a final diagnosis based on the symptoms reported and your diagnostic knowledge of the diseases. The decision should be one of the following two options: (1) Ask about symptoms (2) Diagnose disease. Please provide your decision directly, without any additional explanation.
639
+
640
+ Decision:
641
+
642
+ Table 15: The prompt for deciding interaction action in the DP baseline.
643
+
644
+ You are an experienced medical expert conducting a consultation with a patient.
645
+
646
+ The symptoms that have already been inquired about and their status are as follows: The symptoms confirmed by the patient: {positive_symptoms}. The symptoms denied by the patient: {negative_symptoms}.
647
+
648
+ To increase diagnostic confidence, you need to choose a symptom to inquire about, ensuring that it has not been previously inquired about. Please provide the name of the symptom directly, without any additional content.
649
+
650
+ The symptom to inquire about:
651
+
652
+ Table 16: The prompt for symptom inquiry in the DP baseline.
653
+
654
+ You will play the role of a patient diagnosed with {disease}.
655
+
656
+ Your symptom presentation is as follows: The symptoms you have: {positive_symptoms}. The symptoms you do not have: {negative_symptoms}.
657
+
658
+ Based on your symptom presentation, please answer truthfully whether you have the symptom {symptom}. The answer should be either 'Yes' or 'No'. Please provide the answer directly without any additional content.
659
+
660
+ Answer:
661
+
662
+ Table 17: The prompt for simulating the patient's response in the DP baseline.
663
+
664
+ You are an experienced medical expert, currently conducting a consultation with a patient.
665
+
666
+ After several rounds of symptom inquiries: The symptoms confirmed by the patient are: {positive_symptoms}. The symptoms denied by the patient are: {negative_symptoms}. The diseases you suspect include: {candidate_diseases}.
667
+
668
+ Please select three diseases from the suspected list above as the diagnosis results, ordered from most to least likely. Provide the answer in the form of a Python string list, and do not include any additional content.
669
+
670
+ Top three most likely diseases:
671
+
672
+ Table 18: The prompt for disease diagnosis in the DP baseline.
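+ Since Table 18 requests the answer as a Python string list, the raw reply has to be parsed before evaluation. A minimal sketch under that assumption; the function name and the empty-list fallback are ours, not part of the DDO or DP implementations.
+
+ ```python
+ import ast
+
+ def parse_top_diseases(raw: str) -> list[str]:
+     """Parse a reply expected to be a Python string list, e.g.
+     "['influenza', 'common cold', 'pneumonia']"; return [] if malformed."""
+     try:
+         value = ast.literal_eval(raw.strip())
+     except (ValueError, SyntaxError):
+         return []
+     if isinstance(value, list) and all(isinstance(x, str) for x in value):
+         return value[:3]  # keep at most the three requested diseases
+     return []
+
+ print(parse_top_diseases("['influenza', 'common cold', 'pneumonia']"))
+ ```
+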
2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:44db2bab7fb2b0461548a6b4a4b99562492e610f181e3b189381bd62d6633898
3
+ size 566181
2025/DDO_ Dual-Decision Optimization for LLM-Based Medical Consultation via Multi-Agent Collaboration/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/2d212304-cc0a-4f05-ba87-fdea6e7e1bf1_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/2d212304-cc0a-4f05-ba87-fdea6e7e1bf1_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/2d212304-cc0a-4f05-ba87-fdea6e7e1bf1_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84b27e31e36dbc3c36010eb67e3b5c5f190c30b9da8ce9e8b2188c21f8d5dde2
3
+ size 2409062
2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b7f5192b8b8a65c148c2232e24e5cd2873e04f1fa78262f6ecd45c346237fb8e
3
+ size 6175728
2025/DEBATE, TRAIN, EVOLVE_ Self‐Evolution of Language Model Reasoning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/d1ecf317-cf42-4c20-9ffa-b0e6f009c87a_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/d1ecf317-cf42-4c20-9ffa-b0e6f009c87a_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/d1ecf317-cf42-4c20-9ffa-b0e6f009c87a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7157d03967ae4184007de9e8a80409f54ca1b3fbc80bc1ff4fcfc46ea0d5a675
3
+ size 562161
2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/full.md ADDED
@@ -0,0 +1,645 @@
1
+ # DEL-ToM: Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic
2
+
3
+ Yuheng Wu$^{1}$
4
+
5
+ Jianwen Xie$^{2}$
6
+
7
+ Denghui Zhang $^{3,\dagger}$
8
+
9
+ Zhaozhuo Xu$^{3,\dagger}$
10
+
11
+ $^{1}$Stanford University $^{2}$Lambda, Inc. $^{3}$Stevens Institute of Technology
12
+
13
+ yuhengwu@stanford.edu jianwen.xie@lambda.ai {dzhang42,zxu79}@stevens.edu
14
+
15
+ # Abstract
16
+
17
+ Theory-of-Mind (ToM) tasks pose a unique challenge for large language models (LLMs), which often lack the capability for dynamic logical reasoning. In this work, we propose DEL-ToM, a framework that improves verifiable ToM reasoning through inference-time scaling rather than architectural changes. Our approach decomposes ToM tasks into a sequence of belief updates grounded in Dynamic Epistemic Logic (DEL), enabling structured and verifiable dynamic logical reasoning. We use data generated automatically via a DEL simulator to train a verifier, which we call the Process Belief Model (PBM), to score each belief update step. During inference, the PBM evaluates candidate belief traces from the LLM and selects the highest-scoring one. This allows LLMs to allocate extra inference-time compute to yield more transparent reasoning. Experiments across model scales and benchmarks show that DEL-ToM consistently improves performance, demonstrating that verifiable belief supervision significantly enhances LLMs' ToM capabilities without retraining. Code is available at https://github.com/joel-wu/DEL-ToM.
18
+
19
+ # 1 Introduction
20
+
21
+ "To know what John knows is to know the worlds that are compatible with his belief, and to know which ones are not." — Jaakko Hintikka (Hintikka and B. P. Hintikka, 1989)
22
+
23
+ The ability to attribute beliefs, desires, and intentions to others, known as Theory-of-Mind (ToM) (Premack and Woodruff, 1978; C. Dennett, 1978; Apperly and Butterfill, 2009), is a fundamental component of social intelligence (Baron-Cohen, 1991). ToM enables agents to reason about what others think, want, or know, and to anticipate their subsequent behavior (Rabinowitz et al., 2018).
24
+
25
+ Recent studies suggest that large language models (LLMs) (Brown et al., 2020) exhibit ToM abilities (Strachan et al., 2024; Lin et al., 2024; Street et al., 2024; Amirizaniani et al., 2024; Sclar et al., 2025; Wu et al., 2025). However, ToM performance follows a scaling law (Kosinski, 2024), with smaller models showing limited ability on ToM tasks. This limitation poses a challenge for low-resource deployments, where edge agents are expected to robustly infer users' intentions and act in alignment with human expectations. At the same time, current evaluations compare only the final output to the ground-truth label (Chen et al., 2024), leaving it unclear whether correct answers result from genuine reasoning or from lucky guessing (Ullman, 2023). Consequently, existing ToM reasoning remains unverifiable and not applicable in practice. This paper addresses the question: How can we enable LLMs to perform verifiable ToM reasoning, especially in low-resource settings?
26
+
27
+ Following process reliabilism (Goldman, 1979), verifiable ToM reasoning requires a sequence of intermediate belief states that reliably support the final conclusion. We formalize this reasoning process using Dynamic Epistemic Logic (DEL) (Baltag et al., 1998; Van Benthem, 2001; Plaza, 2007; Van Ditmarsch et al., 2007; Aucher and Schwarzentruber, 2013), a logic system grounded in the traditions of formal logic and semantics (Frege, 1879; Russell and Whitehead, 1910; Wittgenstein, 1922; Tarski, 1956; Hintikka, 1962; Kripke, 1963). DEL models agents' beliefs with epistemic models, actions with event models, and belief change via product updates, allowing us to view ToM reasoning as dynamic logical reasoning.
28
+
29
+ Within this framework, transparent belief traces are generated and evaluated by a Process Belief Model (PBM). By scoring multiple candidates, the PBM enables us to select the most reliable trace. This constitutes inference-time scaling: spending more computation during inference to obtain more
30
+
31
+ ![](images/63498eddeb61c60d02fc2f9825cbeb0d2a916cdc91e9da7111fe343af1eae944.jpg)
32
+ Figure 1: Overview of the DEL-ToM framework. Each belief state is inferred from the previous state and the current action. The LLM generates multiple candidate belief traces in parallel, and the PBM assigns reward scores to filter a top-scoring subset, which is then used to continue reasoning toward the next belief state.
33
+
34
+ reliable reasoning traces, which in turn allows smaller models to achieve stronger ToM performance while remaining efficient for deployment. We experiment with different trace selection and search strategies for ToM reasoning.
35
+
36
+ To train the PBM, we first generate ToM-related questions and use DEL to produce belief process labels. We then use GPT-4o-mini (Hurst et al., 2024) to answer these questions. Finally, DEL-generated gold labels are used to automatically score GPT-generated traces, producing positive and negative examples for PBM training. Unlike other process-level reward modeling datasets, which rely on human annotation or LLM assistance (Wang et al., 2024), our labels are derived from a formal DEL system, which guarantees correctness.
37
+
38
+ In conclusion, we approach ToM reasoning through the lens of formal logic. Using a PBM trained via DEL, we make each intermediate belief update explicit and employ search-based methods to select the most reliable trace. This enables inference-time scaling and yields dynamic logical reasoning grounded not only in model outputs, but in verifiable, structured belief updates. Our contributions are threefold:
39
+
40
+ - We propose a new perspective on ToM reasoning by framing it as a problem of process reliability. By modeling reasoning as a multi-step dynamic belief-update process, we can apply inference-time scaling to select more reliable belief traces.
41
+ - We formalize ToM reasoning in the framework of DEL and construct a PBM dataset with noise-free supervision derived from DEL. This enables training PBMs for stepwise reasoning evaluation.
42
+ - We evaluate our approach across different model scales and search strategies. Our method consistently improves LLM performance on standard ToM benchmarks.
43
+
44
+ # 2 Background and Motivation
45
+
46
+ ToM in LLMs. Researchers have designed various tasks to evaluate the ToM capabilities of LLMs. Among these, false belief tasks are the most widely used, typically in two forms:
47
+
48
+ - Unexpected Contents: A protagonist is shown an object with misleading external cues (e.g., an opaque crayon box that actually contains candles). The LLM under evaluation must identify that the actual content is candles while recognizing that the protagonist holds the mistaken belief that the box contains crayons.
49
+ - Unexpected Transfer: An object is moved without the protagonist's knowledge, and the LLM must predict where the protagonist will search for it, based on the protagonist's outdated belief.
50
+
51
+ Among the two, the unexpected transfer task is more commonly used. Figure 1 illustrates a typical instance of this task setup.
52
+
53
+ Illustrative Example. As shown in Figure 1, the story consists of four sentences, each describing an action that updates the characters' belief state. The goal of ToM reasoning is to infer the sequence of belief states, culminating in the final belief state.
54
+
55
+ In this example, after Action 1, John, Mary, and Alice are all present in the kitchen, but the chocolate has not been introduced, so no beliefs are yet established. After Action 2, John places the chocolate in the drawer, and everyone present observes this action. Hence, Mary believes that John believes the chocolate is in the drawer. Following Action 3, John exits the kitchen. Then, in Action 4, Mary moves the chocolate to the table, an action that John is unaware of. As a result, Mary thinks John still believes the chocolate is in the drawer.
56
+
57
+ From this example, we see that ToM reasoning can be understood as an action applied to a prior belief state, causing characters to gain or lose information and thereby forming a new state. This process naturally aligns with DEL, which represents each belief state with an epistemic model, each action with an event model, and updates beliefs via the product update by combining the state with an action. Together, these elements yield a formal dynamic-logic system that derives the full belief-state trace over time.
60
+
61
+ Our Objective: Inference-Time Scaling for Verifiable ToM in LLMs. Our goal is to enable LLMs to perform ToM reasoning in an efficient and verifiable manner. To this end, we adopt an inference-time scaling strategy that allocates extra compute during inference to improve the reliability of reasoning. This approach not only enhances the reasoning capability of large models but also allows smaller models to remain deployment-efficient while achieving performance competitive with closed-source LLMs.
62
+
63
+ # 3 Inference-Time Scaling for ToM
64
+
65
+ In this section, we first formulate ToM reasoning as a DEL process. We then describe how the PBM is constructed and trained to evaluate belief traces, and present inference-time scaling pipelines that use the PBM to guide ranking and selection of reasoning traces.
66
+
67
+ # 3.1 Formulating ToM Reasoning within DEL
68
+
69
+ We formulate ToM reasoning within the framework of DEL, which is based on Kripke's possible-world semantics (Kripke, 1963). Let $\mathcal{P}$ be a countable set of atomic propositions, representing basic facts about the world, and let $\mathcal{A}$ be a finite, non-empty set of agents. The epistemic language $\mathcal{L}(\mathcal{P},\mathcal{A})$ is defined by the Backus-Naur form (Knuth, 1964):
70
+
71
+ $$
72
+ \varphi := p \mid \neg \varphi \mid \varphi \wedge \varphi \mid B _ {i} \varphi ,
73
+ $$
74
+
75
+ where $p \in \mathcal{P}$, $i \in \mathcal{A}$, and $\varphi$ ranges over well-formed formulas. The formula $B_{i}\varphi$ is read as "agent $i$ believes $\varphi$." For example, "John believes the chocolate is in the drawer" can be written as $B_{\mathrm{John}}(\mathrm{chocolate\_in\_drawer})$. Based on this language, we define epistemic models, event models, and the product update.
76
+
77
+ Definition 1 (Epistemic Model). An epistemic model over agent set $\mathcal{A}$ and proposition set $\mathcal{P}$ is a triple $\mathcal{M} = (W, R, V)$ , where:
78
+
79
+ - $W$ is a set of possible worlds, where each world is a complete valuation of $\mathcal{P}$ ;
80
+
81
+ - $R: \mathcal{A} \to 2^{W \times W}$ assigns each agent $a \in \mathcal{A}$ an accessibility relation $R_a$ ;
82
+ - $V: \mathcal{P} \to 2^W$ maps each atomic proposition $p \in \mathcal{P}$ to the set of worlds where $p$ is true.
83
+
84
+ A state is a pointed epistemic model $(\mathcal{M},w)$ where $w\in W$ is the designated actual world.
85
+
86
+ We write $wR_{a}v$ to denote that world $v$ is accessible from world $w$ according to agent $a$ : in world $w$ , agent $a$ considers $v$ possible.
87
+
88
+ On the basis of an epistemic model $\mathcal{M} = (W, R, V)$ and a designated world $w \in W$ , the satisfaction relation $\models$ for $\mathcal{L}(\mathcal{P}, \mathcal{A})$ is defined as follows:
89
+
90
+ - $\mathcal{M}, w \models p$ iff $w \in V(p)$ ;
91
+ - $\mathcal{M}, w \models B_a\varphi$ iff for all $v \in W$ such that $wR_{a}v$ , we have $\mathcal{M}, v \models \varphi$ .
92
+
93
+ Definition 2 (Event Model). An event model is a tuple $\varepsilon = (E, Q, \text{pre}, \text{post})$ , where:
94
+
95
+ - $E$ is a finite, non-empty set of events;
96
+ - $Q: \mathcal{A} \to 2^{E \times E}$ assigns to each agent $a \in \mathcal{A}$ an indistinguishability relation $Q_a$ over events;
97
+ - pre: $E \to \mathcal{L}(\mathcal{P},\mathcal{A})$ assigns to each $e \in E$ a precondition specifying when $e$ is executable;
98
+ - post: $E \to \mathcal{L}(\mathcal{P},\mathcal{A})$ assigns to each $e \in E$ a postcondition describing how the world changes.
99
+
100
+ We refer to a pointed event model $(\varepsilon, e)$ as an action, where $e \in E$ is the actual event that occurs.
101
+
102
+ Definition 3 (Product Update). Let $(\mathcal{M},w)$ be a state with $\mathcal{M} = (W,R,V)$ , and let $(\varepsilon ,e)$ be an action with $\varepsilon = (E,Q,\mathrm{pre},\mathrm{post})$ . Suppose that the precondition is satisfied, i.e., $\mathcal{M},w\models \mathsf{pre}(e)$ . Then the product update results in a new state $(\mathcal{M}',(w,e))$ , where the updated epistemic model $\mathcal{M}' = (W',R',V')$ is defined as follows:
103
+
104
+ - $W' = \{(w', e') \in W \times E \mid \mathcal{M}, w' \models \operatorname{pre}(e')\}$ ;
105
+ - For each $a \in \mathcal{A}$ , $R_a' = \{((w', e'), (v', f')) \in W' \times W' \mid w' R_a v' \wedge e' Q_a f'\}$ ;
106
+ - $(w', e') \in V'(p)$ iff $\operatorname{post}(e') \models p$ or $(\mathcal{M}, w' \models p \land \operatorname{post}(e') \not\models \neg p)$ , for each $p \in \mathcal{P}$ .
107
+
108
+ Applying DEL to ToM Reasoning. We illustrate States 4-6 in Figure 2. In State 4, both Mary and Alice are present and observe that the chocolate is on the table, so $\mathcal{M}, w_4 \models$ table and $R_M = R_A = \{(w_4, w_4)\}$ . After Action 5, Mary exits the kitchen, $\mathrm{pre}(e_5) = \top$ , $\mathrm{post}(e_5) = \text{table}$ , so facts remain unchanged but Mary will not observe subsequent actions. In Action 6, Alice moves the chocolate to the cupboard with $\mathrm{pre}(e_6) = \top$ and $\mathrm{post}(e_6) = \text{cupboard} \land \neg \text{table}$ . After the product updates, the actual state $(\mathcal{M}, w_6)$ satisfies
109
+
110
+ ![](images/8520c08925f2a60980732c17aae817c5332fd86e3c78c947528774ca57b34198.jpg)
111
+ Figure 2: Training data synthesis for PBM. The right part illustrates the accessibility relations generated by the DEL simulator.
112
+
113
+ $\mathcal{M}, w_6 \models$ cupboard; Alice's accessibility relation $R_A'$ points to cupboard-worlds, while Mary's relation $R_M'$ still reaches the table-world. Hence
114
+
115
+ $$
116
+ \begin{array}{l} \mathcal{M}, w_6 \vDash B_{\text{Mary}} B_{\text{Alice}} \varphi \\ \iff \forall v\, (w_6 R'_{M} v \Rightarrow \mathcal{M}, v \vDash B_{\text{Alice}} \varphi) \\ \iff \mathcal{M}, w_4 \vDash B_{\text{Alice}} \varphi \\ \iff \forall u\, (w_4 R'_{A} u \Rightarrow \mathcal{M}, u \vDash \varphi) \\ \iff \mathcal{M}, w_4 \vDash \varphi, \end{array}
117
+ $$
118
+
119
+ where $\varphi$ denotes "the chocolate is on the table." Thus, Mary believes that Alice believes it is on the table. This illustrates that the core of DEL reasoning lies in constructing the accessibility relations $R$ at each state, finding the worlds that are compatible with an agent's belief (Hintikka and B. P. Hintikka, 1989).
120
+
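+ To make the product-update mechanics concrete, the following is a deliberately simplified Python sketch of Definitions 1-3: worlds are sets of true atoms, postconditions add or remove a single atom, preconditions are omitted, and an absent agent simply keeps pointing at the pre-update worlds. It mirrors the chocolate example but is not the paper's DEL simulator.
+
+ ```python
+ # Deliberately simplified DEL sketch; not the full product update of Def. 3.
+ class EpistemicModel:
+     def __init__(self, worlds, rel, actual):
+         self.worlds = worlds    # W: set of possible worlds (frozensets of atoms)
+         self.rel = rel          # R: agent -> set of (w, v) accessibility pairs
+         self.actual = actual    # designated actual world
+
+     def believes(self, agent, atom):
+         """M, actual |= B_agent atom: atom holds in every accessible world."""
+         accessible = [v for (w, v) in self.rel[agent] if w == self.actual]
+         return bool(accessible) and all(atom in v for v in accessible)
+
+ def move_object(model, present, add, remove):
+     """Update for a move observed only by the `present` agents."""
+     upd = lambda w: (w | {add}) - {remove}
+     new_rel = {}
+     for agent, edges in model.rel.items():
+         if agent in present:
+             # observer: both endpoints of each edge reflect the new fact
+             new_rel[agent] = {(upd(w), upd(v)) for (w, v) in edges}
+         else:
+             # absent agent: still considers only the old worlds possible
+             new_rel[agent] = {(upd(w), v) for (w, v) in edges}
+     new_worlds = {upd(w) for w in model.worlds} | model.worlds
+     return EpistemicModel(new_worlds, new_rel, upd(model.actual))
+
+ # John puts the chocolate in the drawer (all present); John leaves;
+ # Mary then moves it to the table, which John does not observe.
+ w0 = frozenset({"drawer"})
+ m = EpistemicModel({w0}, {"John": {(w0, w0)}, "Mary": {(w0, w0)}}, w0)
+ m = move_object(m, present={"Mary"}, add="table", remove="drawer")
+ print(m.believes("Mary", "table"))   # True
+ print(m.believes("John", "drawer"))  # True: John's belief is outdated
+ ```
+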
121
+ # 3.2 Building the PBM with DEL
122
+
123
+ Generating Process-Level Labels via DEL. We integrate a DEL simulator into the Hi-ToM generators $^{1}$ (Wu et al., 2023) and synthesize 20,000 ToM stories with process labels. For each story, we build process-level traces across different orders of belief: at each action, we update the accessibility relations $R$ according to the action's semantics and whether the observation is public or private, and record the resulting belief state in the trace set. All process-level label generation code is integrated into the Hi-ToM generators and included in our released codebase.
124
+
125
+ Dataset Assembly. For each synthesized story, we prompt GPT-4o-mini (Hurst et al., 2024) to produce step-by-step belief updates in a DEL format (the prompt is provided in Appendix A). We pair each LLM trace with the DEL per-step labels to form training instances, yielding both positive and negative supervision for process-level reward modeling.
128
+
129
+ Training the PBM. PBM is a scoring function $f: \mathcal{Q} \times S \to \mathbb{R}^+$ that assigns a score to each step $s_i$ in a GPT-4o-mini-generated belief trace $s$ , given a ToM problem $q$ . We treat this as a binary classification task: each step is labeled as either correct or incorrect according to the DEL-generated belief trace. The model is trained using the following binary cross-entropy loss:
130
+
131
+ $$
132
+ \mathcal{L}_{\mathrm{PBM}} = -\sum_{i=1}^{K} y_{s_i} \log f(s_i) - \sum_{i=1}^{K} \left(1 - y_{s_i}\right) \log\left(1 - f(s_i)\right),
133
+ $$
134
+
135
+ where $K$ is the number of steps, $y_{s_i}$ is the binary label, and $f(s_i)$ is the predicted score. The training code is adapted from the RLHF-Reward-Modeling codebase $^2$ .
136
+
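+ As a concrete reading of the loss above, here is a hedged PyTorch sketch; the scalar scoring head and tensor shapes are assumptions of the sketch, and reduction="sum" matches the sum over steps in $\mathcal{L}_{\mathrm{PBM}}$.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ # Sketch of L_PBM: f(s_i) = sigmoid(logit_i); one logit per reasoning step.
+ def pbm_loss(step_logits: torch.Tensor, step_labels: torch.Tensor) -> torch.Tensor:
+     # binary cross-entropy summed over the K steps of one belief trace
+     return F.binary_cross_entropy_with_logits(step_logits, step_labels, reduction="sum")
+
+ logits = torch.tensor([2.1, -0.7, 0.3])  # K = 3 per-step scores before sigmoid
+ labels = torch.tensor([1.0, 0.0, 1.0])   # per-step DEL simulator verdicts
+ print(pbm_loss(logits, labels))
+ ```
+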
137
+ # 3.3 Inference-Time Scaling Pipeline
138
+
139
+ Beam Search. Beam search is a decoding method that maintains multiple partial belief traces during generation (Figure 1): at each action, the LLM observes the trace so far and proposes multiple candidate belief updates for the current state. The PBM scores these candidates, and a high-scoring subset is selected to continue reasoning. This process repeats until all actions are processed. Formally, the procedure is as follows, with a minimal sketch after the list:
140
+
141
+ - Initialize $k$ beams with candidate first-step updates sampled from the model.
142
+ - Expand each beam with $b$ next-step candidates, yielding $k \times b$ partial paths.
143
+ - Score each path with the PBM, ranking by the score of the most recent step.
144
+ - Retain the top $k$ paths and iterate until reaching an end-of-sequence or the maximum depth.
145
+
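+ The sketch below illustrates this loop in Python; `propose` (the LLM's next-step candidate generator) and `pbm_score` (the trained verifier) are stand-ins assumed by this sketch, not the released implementation.
+
+ ```python
+ import heapq
+
+ # Minimal sketch of PBM-guided beam search over the story's actions.
+ def beam_search(actions, propose, pbm_score, k=4, b=4):
+     beams = [[]]                                       # partial belief traces
+     for action in actions:
+         candidates = []
+         for trace in beams:
+             for step in propose(trace, action, n=b):   # b expansions per beam
+                 candidates.append(trace + [step])
+         # rank by the PBM score of the most recent step, keep the top k
+         beams = heapq.nlargest(k, candidates, key=pbm_score)
+     return max(beams, key=pbm_score)                   # best complete trace
+ ```
+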
146
+ Best-of-N (BoN). Alternatively, instead of updating step by step, the LLM may generate $N$ complete belief traces after reading the entire story. The PBM scores each step in these traces, aggregates the step-wise scores into a process-level reward, and reranks the candidates to identify the most reliable trace as the final output. We experiment with different aggregation rules for computing the trace-level score:
147
+
148
+ - Last: Use the PBM score of the final step.
149
+ - Min: Use the lowest score across all steps.
150
+ - Avg: Use the average score across the trace.
151
+ - Prod: Multiply the scores of all steps.
152
+ - Majority: Select the final answer by simple majority voting across traces, without using PBM.
153
+
154
+ Based on the aggregated scores, we consider two ranking strategies:
155
+
156
+ - Vanilla BoN: Select the single trace with the highest PBM score.
157
+ - Weighted BoN: Group traces by their final answers, yielding a candidate set $\mathcal{Y} = \{y_1, y_2, \ldots\}$ . We then sum PBM scores within each group and select the answer $\hat{y}$ with the highest total:
158
+
159
+ $$
160
+ \hat{y} = \arg\max_{y \in \mathcal{Y}} \sum_{i=1}^{N} \mathbb{1}(y_i = y) \cdot \mathrm{PBM}(p, t_i),
161
+ $$
162
+
163
+ where $t_i$ is the $i$ -th trace, $y_i$ denotes the trace's final answer, and $\mathrm{PBM}(p, t_i)$ is its score.
164
+
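+ A minimal sketch of these aggregation and ranking rules; the trace representation and all values are illustrative, not the paper's code.
+
+ ```python
+ from collections import defaultdict
+ from math import prod
+
+ # Traces are (final_answer, per-step PBM scores) pairs in this sketch.
+ def aggregate(scores, rule="min"):
+     return {"last": scores[-1], "min": min(scores),
+             "avg": sum(scores) / len(scores), "prod": prod(scores)}[rule]
+
+ def vanilla_bon(traces, rule="min"):
+     # pick the single trace with the highest aggregated PBM score
+     return max(traces, key=lambda t: aggregate(t[1], rule))[0]
+
+ def weighted_bon(traces, rule="min"):
+     totals = defaultdict(float)
+     for answer, scores in traces:
+         totals[answer] += aggregate(scores, rule)  # sum scores per answer group
+     return max(totals, key=totals.get)
+
+ traces = [("drawer", [0.9, 0.8, 0.7]), ("table", [0.9, 0.2, 0.6]),
+           ("drawer", [0.8, 0.8, 0.9])]
+ print(vanilla_bon(traces), weighted_bon(traces))   # drawer drawer
+ ```
+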
165
+ # 4 Experiments
166
+
167
+ # 4.1 Experimental Setup
168
+
169
+ Platform. All experiments are conducted on a single NVIDIA GH200 GPU node. We use the vLLM (Kwon et al., 2023) framework for efficient batched inference and large-scale decoding.
170
+
171
+ PBM Training. We fine-tune a PBM model based on Llama3.1-8B-Instruct (Grattafori et al., 2024). The model is trained for 1 epoch using our synthesized dataset.
172
+
173
+ Test Models. We evaluate our methods on both the Qwen3 series (0.6B, 1.7B, 4B, 8B) (Yang et al., 2025) and the Llama3.2 series (1B, 3B) (Grattafori et al., 2024), as well as closed-source models including gpt-4.1, gpt-4o, gpt-4.1-mini, and gpt-4o-mini. For comparison, we also report results from baselines such as o4-mini, gpt-4.1-nano, Qwen3-235B-A22B (Yang et al., 2025), DeepSeek-V3 (Liu et al., 2024), and OLMo-2-0325-32B (Walsh et al., 2025). All models are evaluated under their default generation settings.
176
+
177
+ Datasets. We conduct evaluations on two datasets: Hi-ToM (Wu et al., 2023) and the ToM tasks introduced by Kosinski (Kosinski, 2024). For Hi-ToM, we only evaluate one-chapter stories, and for Kosinski's dataset we restrict evaluation to the unexpected transfer task.
178
+
179
+ Metrics and Prompt Format. We report final answer accuracy as the main evaluation metric. All models are evaluated using a consistent prompting format, as detailed in Appendix A.
180
+
181
+ # 4.2 Results on Hi-ToM Dataset
182
+
183
+ For BoN, we scale $N$ up to 1024 and apply the weighted strategy, selecting the best aggregation rule for each instance. For beam search, we evaluate Qwen3-4B and Qwen3-8B with beam sizes from 4 to 256, excluding smaller models since they cannot generate valid intermediate reasoning steps.
184
+
185
+ Main Results. As shown in Table 1, incorporating PBM consistently improves ToM reasoning across both BoN and beam search. For example, Llama3.2-3B gains 33.6 points in average accuracy, while Qwen3-4B improves by 9.4 points in the BoN setting. Similarly, with beam search, Qwen3-8B, whose baseline underperforms Qwen3-4B, achieves the highest accuracy of 87.0 once guided by PBM. Moreover, our method generalizes to both open- and closed-source models, as the gpt series also shows clear gains with PBM.
186
+
187
+ Comparison with SOTA LLMs. As shown in Table 2, smaller open-source models can match or surpass much larger LLMs. For example, Qwen3-4B+PBM achieves higher average accuracy than gpt-4.1, DeepSeek-V3, and OLMo-32B, while Llama3.2-3B+PBM performs on par with gpt-4.1-mini. These findings highlight the effectiveness of PBM in scaling ToM reasoning.
188
+
189
+ Scaling Test-Time Compute for ToM Reasoning. As shown in Figure 3, increasing the number of sampled belief traces $N$ improves ToM performance only when guided by PBM. Among aggregation strategies, min and prod are the most reliable, while avg and last often degrade under weighted aggregation.
190
+
191
+ Table 1: Inference-time scaling across belief orders in the Hi-ToM dataset using BoN and Beam Search. "Ori" denotes baseline accuracy, and "+" denotes accuracy with inference-time scaling.
192
+
193
+ <table><tr><td rowspan="2">Model</td><td colspan="2">0-th Order</td><td colspan="2">1-th Order</td><td colspan="2">2-th Order</td><td colspan="2">3-th Order</td><td colspan="2">4-th Order</td><td colspan="2">Average</td></tr><tr><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td></tr><tr><td colspan="13">BoN (N = 1024)</td></tr><tr><td>Qwen3-4B</td><td>100.0</td><td>100.0</td><td>79.8</td><td>85.0</td><td>79.3</td><td>90.0</td><td>70.2</td><td>82.5</td><td>46.0</td><td>65.0</td><td>75.1</td><td>84.5</td></tr><tr><td>Qwen3-1.7B</td><td>78.0</td><td>82.5</td><td>59.7</td><td>65.0</td><td>45.2</td><td>55.0</td><td>47.0</td><td>62.5</td><td>47.8</td><td>57.5</td><td>55.5</td><td>64.5</td></tr><tr><td>Qwen3-0.6B</td><td>69.2</td><td>80.0</td><td>52.0</td><td>72.5</td><td>35.0</td><td>47.5</td><td>31.5</td><td>52.5</td><td>34.0</td><td>47.5</td><td>44.3</td><td>60.0</td></tr><tr><td>Llama3.2-3B</td><td>68.2</td><td>85.0</td><td>52.0</td><td>80.0</td><td>43.2</td><td>82.5</td><td>37.0</td><td>82.5</td><td>36.8</td><td>75.0</td><td>47.4</td><td>81.0</td></tr><tr><td>Llama3.2-1B</td><td>41.5</td><td>46.2</td><td>40.0</td><td>53.8</td><td>28.5</td><td>61.5</td><td>41.5</td><td>84.6</td><td>29.2</td><td>58.3</td><td>36.1</td><td>60.9</td></tr><tr><td colspan="13">BoN (N = 4)</td></tr><tr><td>gpt-4.1</td><td>95.0</td><td>97.5</td><td>85.0</td><td>87.5</td><td>85.0</td><td>92.5</td><td>82.5</td><td>95.0</td><td>70.0</td><td>77.5</td><td>83.5</td><td>90.0</td></tr><tr><td>gpt-4.1-mini</td><td>77.5</td><td>70.0</td><td>90.0</td><td>85.0</td><td>70.0</td><td>75.0</td><td>75.0</td><td>92.5</td><td>77.5</td><td>92.5</td><td>78.0</td><td>83.0</td></tr><tr><td>gpt-4o</td><td>100.0</td><td>100.0</td><td>85.0</td><td>90.0</td><td>82.5</td><td>92.5</td><td>90.0</td><td>97.5</td><td>77.5</td><td>85.0</td><td>87.0</td><td>93.0</td></tr><tr><td>gpt-4o-mini</td><td>90.0</td><td>100.0</td><td>75.0</td><td>87.5</td><td>77.5</td><td>95.0</td><td>77.5</td><td>100.0</td><td>55.0</td><td>85.0</td><td>75.0</td><td>93.5</td></tr><tr><td colspan="13">Beam Search (N = 256)</td></tr><tr><td>Qwen3-8B</td><td>96.5</td><td>80.0</td><td>53.3</td><td>80.0</td><td>38.8</td><td>85.0</td><td>55.8</td><td>95.0</td><td>57.8</td><td>95.0</td><td>60.4</td><td>87.0</td></tr><tr><td>Qwen3-4B</td><td>100.0</td><td>100.0</td><td>79.8</td><td>85.0</td><td>79.3</td><td>97.5</td><td>70.2</td><td>82.5</td><td>46.0</td><td>60.0</td><td>75.1</td><td>85.0</td></tr></table>
194
+
195
+ Table 2: Comparison with SOTA LLMs on Hi-ToM (BoN, $N = 1024$ ). “+PBM” denotes accuracy with inference-time scaling.
196
+
197
+ <table><tr><td>Model</td><td>0-th</td><td>1-th</td><td>2-th</td><td>3-th</td><td>4-th</td><td>Avg.</td></tr><tr><td>o4-mini</td><td>97.5</td><td>95.0</td><td>77.5</td><td>87.5</td><td>85.0</td><td>88.5</td></tr><tr><td>gpt-4o</td><td>100.0</td><td>85.0</td><td>82.5</td><td>90.0</td><td>77.5</td><td>87.0</td></tr><tr><td>Qwen3-4B+PBM</td><td>100.0</td><td>85.0</td><td>90.0</td><td>82.5</td><td>65.0</td><td>84.5</td></tr><tr><td>Qwen3-235B-A22B</td><td>100.0</td><td>75.0</td><td>85.0</td><td>85.0</td><td>75.0</td><td>84.0</td></tr><tr><td>gpt-4.1</td><td>95.0</td><td>85.0</td><td>85.0</td><td>82.5</td><td>70.0</td><td>83.5</td></tr><tr><td>DeepSeek-V3</td><td>100.0</td><td>80.0</td><td>90.0</td><td>70.0</td><td>72.5</td><td>82.5</td></tr><tr><td>Llama3.2-3B+PBM</td><td>85.0</td><td>80.0</td><td>82.5</td><td>82.5</td><td>75.0</td><td>81.0</td></tr><tr><td>gpt-4.1-mini</td><td>77.5</td><td>90.0</td><td>70.0</td><td>75.0</td><td>77.5</td><td>78.0</td></tr><tr><td>gpt-4o-mini</td><td>90.0</td><td>75.0</td><td>77.5</td><td>77.5</td><td>55.0</td><td>75.0</td></tr><tr><td>Qwen3-1.7B+PBM</td><td>82.5</td><td>65.0</td><td>55.0</td><td>62.5</td><td>57.5</td><td>64.5</td></tr><tr><td>OLMo-32B</td><td>77.5</td><td>60.0</td><td>60.0</td><td>65.0</td><td>52.5</td><td>63.0</td></tr><tr><td>Llama3.2-1B+PBM</td><td>46.2</td><td>53.8</td><td>61.5</td><td>84.6</td><td>58.3</td><td>60.9</td></tr><tr><td>Qwen3-0.6B+PBM</td><td>80.0</td><td>72.5</td><td>47.5</td><td>52.5</td><td>47.5</td><td>60.0</td></tr><tr><td>gpt-4.1-nano</td><td>22.5</td><td>32.5</td><td>42.5</td><td>27.5</td><td>30.0</td><td>31.0</td></tr></table>
198
+
199
+ In contrast, majority voting fails to improve accuracy, since ToM requires evaluating intermediate belief states rather than aggregating final answers. A theoretical analysis of this limitation is provided in Appendix B.
200
+
201
+ BoN vs. Beam Search. Our experiments show that these two inference-time strategies achieve comparable accuracy. However, beam search rollouts often fail on smaller or weaker models that cannot reliably produce valid intermediate states, making PBM evaluation infeasible. In contrast, BoN generates full belief traces in one shot, where PBM remains effective even when some steps are noisy, and large candidate sets can be produced efficiently using high-throughput backends such as vLLM. We therefore recommend BoN as the preferred inference-time scaling method for ToM reasoning.
202
+
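+ For concreteness, a hedged sketch of the one-shot BoN generation step with vLLM, the backend named above; the model name, decoding settings, and prompt are our assumptions, not the paper's configuration.
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ # Generate N complete belief traces in one shot; the shared input prompt is
+ # processed once while the N sampled outputs are decoded in parallel.
+ llm = LLM(model="Qwen/Qwen3-4B")
+ params = SamplingParams(n=64, temperature=0.8, max_tokens=1024)
+
+ prompt = "Read the story and update each character's beliefs step by step: ..."
+ outputs = llm.generate([prompt], params)
+ candidate_traces = [o.text for o in outputs[0].outputs]  # N full belief traces
+ # Each trace is then scored step by step by the PBM and reranked (Section 3.3).
+ ```
+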
203
+ # 4.3 Results on Out-of-Distribution ToM Data
204
+
205
+ Our PBM is trained on Hi-ToM-style synthetic data, but we ask: Can it generalize to ToM tasks from a different distribution? To test this, we evaluate it on the dataset from Kosinski (Kosinski, 2024), which contains hand-written scenarios with false-belief and true-belief controls. We experiment with the Qwen3 series, following the same inference-time scaling and PBM-based selection procedure as before.
206
+
207
+ Main Results. As shown in Table 3, PBM improves accuracy across all models, confirming its ability to generalize beyond synthetic Hi-ToM scenarios. This shows that PBM functions as a genuine verifier of whether a ToM reasoning process is justified, rather than overfitting to the training distribution, and highlights its robustness on out-of-domain ToM tasks.
208
+
209
+ # 4.4 Benchmarking the PBM
210
+
211
+ To assess the PBM's standalone reliability, we construct a held-out test set of 2,000 multi-step reasoning examples generated by gpt-4o-mini and spanning all belief orders. Each step is labeled by the DEL simulator as either correct or incorrect, and the PBM is evaluated by its step-level classification accuracy. We benchmark two PBMs trained on different base models: Llama3.1-8B-Instruct and Llama3.2-3B-Instruct.
212
+
213
+ Evaluating PBM. As shown in Table 4, the larger PBM achieves consistently higher accuracy, and performance decreases as the belief order increases.
214
+
215
+ ![](images/e82aa0816d62962dabac01bf83cc1b110b4b788d098c5edeb2c7309b33503e57.jpg)
216
+ (a) Vanilla BoN decoding on Qwen3-4B.
217
+
218
+ ![](images/2008eadbd831f050cdac90a7e2ea8ae8716ba49607b1536b1aa85c522da3abe7.jpg)
219
+ (b) Weighted BoN decoding on Qwen3-4B.
220
+ Figure 3: Accuracy of BoN decoding on Qwen3-4B across different budgets $N$ in the Hi-ToM dataset. Results are shown for (a) Vanilla and (b) Weighted aggregation strategies.
221
+
222
+ Table 3: BoN ($N = 1024$) inference-time scaling on the dataset from Kosinski (Kosinski, 2024), evaluated across different belief types. "Ori" denotes baseline accuracy; "+PBM" denotes accuracy with inference-time scaling.
223
+
224
+ <table><tr><td rowspan="2">Model</td><td colspan="2">False Belief</td><td colspan="2">Informed Protagonist</td><td colspan="2">No Transfer</td><td colspan="2">Present Protagonist</td><td colspan="2">Average</td></tr><tr><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td><td>Ori</td><td>+PBM</td></tr><tr><td>Qwen3-8B</td><td>83.3</td><td>87.5</td><td>83.8</td><td>85.0</td><td>92.8</td><td>97.5</td><td>79.5</td><td>85.0</td><td>84.8</td><td>88.8</td></tr><tr><td>Qwen3-4B</td><td>70.2</td><td>80.0</td><td>86.2</td><td>90.0</td><td>93.2</td><td>95.0</td><td>88.0</td><td>92.5</td><td>84.4</td><td>89.4</td></tr><tr><td>Qwen3-1.7B</td><td>18.2</td><td>35.0</td><td>15.5</td><td>37.5</td><td>24.8</td><td>60.0</td><td>13.8</td><td>30.0</td><td>18.1</td><td>40.6</td></tr><tr><td>Qwen3-0.6B</td><td>14.5</td><td>12.5</td><td>23.5</td><td>30.0</td><td>25.0</td><td>35.0</td><td>21.0</td><td>32.5</td><td>21.0</td><td>27.5</td></tr></table>
225
+
226
+ Table 4: PBM classification accuracy (%) across belief orders on the test set.
227
+
228
+ <table><tr><td>PBM</td><td>0-th</td><td>1-th</td><td>2-th</td><td>3-th</td><td>4-th</td><td>Avg.</td></tr><tr><td>Llama3.1-8B</td><td>99.2</td><td>94.6</td><td>89.0</td><td>87.0</td><td>79.9</td><td>90.0</td></tr><tr><td>Llama3.2-3B</td><td>99.1</td><td>91.9</td><td>84.9</td><td>83.8</td><td>73.8</td><td>86.7</td></tr></table>
229
+
230
+ Table 5: BoN inference-time scaling accuracy (%) on Hi-ToM using different PBMs.
231
+
232
+ <table><tr><td>Model+PBM</td><td>0-th</td><td>1-th</td><td>2-th</td><td>3-th</td><td>4-th</td><td>Avg.</td></tr><tr><td>Qwen3-4B + 8B</td><td>100.0</td><td>85.0</td><td>90.0</td><td>82.5</td><td>65.0</td><td>84.5</td></tr><tr><td>Qwen3-4B + 3B</td><td>100.0</td><td>77.5</td><td>77.5</td><td>72.5</td><td>47.5</td><td>75.0</td></tr><tr><td>Qwen3-1.7B + 8B</td><td>82.5</td><td>65.0</td><td>55.0</td><td>62.5</td><td>57.5</td><td>64.5</td></tr><tr><td>Qwen3-1.7B + 3B</td><td>82.5</td><td>60.0</td><td>45.0</td><td>47.5</td><td>50.0</td><td>57.0</td></tr><tr><td>Qwen3-0.6B + 8B</td><td>80.0</td><td>72.5</td><td>47.5</td><td>52.5</td><td>47.5</td><td>60.0</td></tr><tr><td>Qwen3-0.6B + 3B</td><td>77.5</td><td>55.0</td><td>27.5</td><td>35.0</td><td>32.5</td><td>45.5</td></tr></table>
233
+
234
+ This suggests that stronger models can better verify reasoning steps, while evaluating deeper recursive beliefs is inherently more challenging.
235
+
236
+ Impact of PBM Quality on Task Accuracy. We further test how the quality of the PBM affects end-task performance. Specifically, we run BoN inference-time scaling on the Hi-ToM dataset using different base models, guided either by a strong PBM (Llama3.1-8B-Instruct) or a weaker one (Llama3.2-3B-Instruct). As shown in Table 5, replacing the strong PBM with a weaker one consistently reduces accuracy across all base models and belief orders. This establishes a clear link between verifier quality and final task performance: a stronger PBM leads to better inference-time scaling outcomes.
239
+
240
+ Qualitative Analysis of PBM Behavior. To better understand when PBM succeeds or fails, we examine its behavior on reasoning traces. Below are two steps predicted by the Llama3.2-3B-Instruct PBM.
241
+
242
+ Scenario: Initially, everyone knows that the asparagus is in the blue_cupboard. At the current moment, Charlotte and Elizabeth are present in the room, while Alexander has just left. Charlotte holds a second-order belief about Alexander's belief regarding Elizabeth.
243
+
244
+ # Step $n$
245
+
246
+ - Action: Elizabeth likes the red_box.
247
+ - State: Irrelevant. Charlotte thinks Alexander thinks Elizabeth thinks the asparagus is in blue_cupboard.
248
+ - Prediction: + Ground Truth: +
249
+ - Annotation: This step is correct. The statement is unrelated to the asparagus; no beliefs update. PBM correctly captures this invariance.
250
+
251
+ # Step $n + 1$
252
+
253
+ - Action: Elizabeth moved the asparagus to the green_bucket.
254
+ - State: Only Elizabeth and Charlotte are present when this happens. Charlotte sees this move. Charlotte thinks Alexander thinks Elizabeth thinks the asparagus is in green_bucket.
255
+
256
+ Table 6: API price per 1M tokens.
257
+
258
+ <table><tr><td>Model</td><td>Input</td><td>Cached Input</td><td>Output</td><td>Total</td></tr><tr><td>gpt-4.1</td><td>$2.00</td><td>$0.50</td><td>$8.00</td><td>$10.50</td></tr><tr><td>gpt-4.1-mini</td><td>$0.40</td><td>$0.10</td><td>$1.60</td><td>$2.10</td></tr><tr><td>gpt-4o</td><td>$2.50</td><td>$1.25</td><td>$10.00</td><td>$13.75</td></tr><tr><td>gpt-4o-mini</td><td>$0.15</td><td>$0.075</td><td>$0.60</td><td>$0.825</td></tr></table>
259
+
261
+
262
+ - Prediction: + Ground Truth: -
263
+ - Annotation: This step is incorrect. Since Alexander is not present, he cannot observe Elizabeth's action. Therefore, his beliefs (as perceived by Charlotte) should not change. PBM overgeneralizes belief update based on partial presence.
264
+
265
+ This example shows that while PBM handles simple irrelevant statements, it can fail on nested, perspective-sensitive updates, revealing a key challenge in verifying multi-agent reasoning.
266
+
267
+ # 4.5 Discussion
268
+
269
+ Cost Efficiency for API-based Usage. As shown in Table 1, applying PBM narrows the gap between small and large models: gpt-4.1-mini approaches gpt-4.1, while gpt-4o-mini gains +18.5 points, surpassing gpt-4o. Despite sampling $N = 4$ outputs, mini models remain more cost-efficient, with per-million-token costs of only \$2.10 and \$0.825, compared to \$10.50 and \$13.75 for the larger models (Table 6). Furthermore, because all $N$ samples share the same input prompt, the input cost is paid only once, and only the output tokens scale with $N$. This makes PBM-guided small-batch inference-time scaling a cheaper alternative to using larger models.
270
+
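+ A back-of-envelope check of this arithmetic using the Table 6 prices (per 1M tokens); the token counts are invented for illustration.
+
+ ```python
+ # Invented token counts; prices per 1M tokens are taken from Table 6.
+ in_tok, out_tok, N = 2_000, 500, 4
+
+ mini = (in_tok * 0.15 + N * out_tok * 0.60) / 1e6   # gpt-4o-mini, BoN N = 4
+ big = (in_tok * 2.50 + 1 * out_tok * 10.00) / 1e6   # gpt-4o, single sample
+ print(f"gpt-4o-mini (N=4): ${mini:.6f}  vs  gpt-4o (N=1): ${big:.6f}")
+ # -> $0.001500 vs $0.010000: the PBM-guided mini run remains far cheaper.
+ ```
+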
271
+ Scaling with Model Size. Figure 4 shows how ToM accuracy changes with model size. PBM consistently improves performance and strengthens the scaling trend. For Llama 3.2, the accuracy curve becomes steeper when equipped with PBM, suggesting that larger models benefit more and generalize better under our inference-time intervention. Interestingly, Qwen3-8B performs worse than Qwen3-4B under the vanilla setting, but becomes the best-performing variant once PBM is applied. This indicates that PBM not only boosts accuracy but can also unlock higher-order reasoning abilities that remain latent in the base model.
272
+
273
+ Comparison with RL-based Methods. Recent work (Lu et al., 2025) has explored fine-tuning LLMs with ToM supervision using GRPO (Shao et al., 2024) to enhance their ToM abilities.
274
+
275
+ ![](images/ce2a3599a90072a9754e1713a1ead74d7421587caa28861c7442b026ee588c5d.jpg)
276
+ Figure 4: Scaling trend of average accuracy before and after applying PBM across different LLMs on Hi-ToM. "Ori" denotes baseline accuracy; "+PBM" denotes accuracy with inference-time scaling.
277
+
278
+ However, GRPO requires substantial compute and is notoriously difficult to optimize. In contrast, our PBM is lightweight and efficient: it trains in under three hours on a single GH200 GPU and can be applied to any target model without retraining. GRPO must be re-trained for each model and may even degrade performance on unrelated tasks such as GSM8K (Lu et al., 2025). Our method avoids this issue entirely by leaving model parameters unchanged. PBM thus offers a practical, generalizable, and non-invasive alternative for improving ToM reasoning.
279
+
280
+ # 5 Related Work
281
+
282
+ DEL and Its Connections to ToM. DEL builds on a line of work in epistemic logic, tracing back to Hintikka's possible-world model of knowledge and belief (Hintikka, 1962) and Kripke's formal semantics (Kripke, 1963), and later evolving through studies on information change (Baltag et al., 1998). It was later formalized as a unified framework of epistemic and event models with product updates (Van Ditmarsch et al., 2007) for representing and updating agents' beliefs. This aligns naturally with the core of ToM, which concerns reasoning about others' beliefs. Early cognitive models used DEL to simulate belief change in multi-agent settings (Bolander and Andersen, 2011), showing its suitability for structured belief reasoning. More recent work uses logic-based simulators to supply symbolic supervision for belief updates (Bolander, 2014; Hansen and Bolander, 2020). Building on this line, we use DEL not only as a formalism for modeling beliefs but also as a scaffold for inference-time scaling, enabling compositional and verifiable reasoning in ToM tasks.
283
+
284
+ Inference-Time Scaling of LLMs. Recent work investigates inference-time scaling as an alternative to increasing model size for improving reasoning capabilities (Beeching et al., 2024; Muennighoff et al., 2025). Two main paradigms have been studied. One is single-trace scaling, which encourages deeper reasoning within a single inference path, often via reinforcement learning (Guo et al., 2025a; Cheng et al., 2025) or distillation from a stronger teacher (Li et al., 2025). The other is multi-trace scaling (Brown et al., 2024; Snell et al., 2025; Schaeffer et al., 2025), which generates multiple reasoning traces in parallel and selects the best outcome using voting (Wang et al., 2023, 2025) or external verifiers (Wang et al., 2024; Sun et al., 2024; Guo et al., 2025b; Saad-Falcon et al., 2025). Recent work further combines multi-trace generation with search algorithms such as tree search and beam search to refine reasoning step by step (Zhang et al., 2024; Lin et al., 2025). Our approach follows the multi-trace paradigm and introduces PBM-guided selection, extending inference-time scaling to ToM tasks.
285
+
286
+ # 6 Conclusion
287
+
288
+ This work introduces DEL-ToM, a framework that enhances Theory-of-Mind (ToM) reasoning in LLMs through inference-time scaling. By modeling belief updates with Dynamic Epistemic Logic (DEL) and training a verifier using DEL-generated labels, our approach enables structured and verifiable dynamic logical reasoning. DEL-ToM improves ToM performance across models and datasets, demonstrating that logical reasoning can be strengthened through formal logic and inference-time supervision. This opens new avenues for deploying ToM-capable LLMs in resource-constrained settings without retraining.
289
+
290
+ # Limitation
291
+
292
+ Our approach depends on accurate belief supervision from a formal-logic-based simulator. Such supervision may not generalize to all types of reasoning or real-world language use. Additionally, beam search is less effective for models with weak instruction-following capabilities, limiting their practical deployment. Future work could explore more efficient trace selection methods and extend our approach to broader domains beyond ToM.
293
+
294
+ # Acknowledgment
295
+
296
+ We thank the anonymous reviewers for their valuable feedback. We gratefully acknowledge the support of Lambda, Inc., for providing compute resources for this project. The work of Zhaozhuo Xu was supported by NSF grants 2451398 and 2450524.
297
+
298
+ # References
299
+
300
+ Maryam Amirizaniani, Elias Martin, Maryna Sivachenko, Afra Mashhadi, and Chirag Shah. 2024. Do llms exhibit human-like reasoning? evaluating theory of mind in llms for open-ended responses. arXiv preprint arXiv:2406.05659.
301
+ Ian A Apperly and Stephen A Butterfill. 2009. Do humans have two systems to track beliefs and belief-like states? Psychological review, 116(4):953.
302
+ Guillaume Aucher and François Schwarzentruber. 2013. On the complexity of dynamic epistemic logic. In Proceedings of the 14th Conference on Theoretical Aspects of Rationality and Knowledge.
303
+ Alexandru Baltag, Lawrence S. Moss, and Slawomir Solecki. 1998. The logic of public announcements, common knowledge, and private suspicions. In Proceedings of the 7th Conference on Theoretical Aspects of Rationality and Knowledge.
304
+ Simon Baron-Cohen. 1991. Precursors to a theory of mind: Understanding attention in others. Natural theories of mind: Evolution, development and simulation of everyday mindreading, 1(233-251):1.
305
+ Edward Beeching, Lewis Tunstall, and Sasha Rush. 2024. Scaling test-time compute with open models.
306
+ Thomas Bolander. 2014. Seeing is believing: Formalising false-belief tasks in dynamic epistemic logic. In European conference on social intelligence (ECSI 2014), pages 87-107.
307
+ Thomas Bolander and Mikkel Birkegaard Andersen. 2011. Epistemic planning for single-and multi-agent systems. Journal of Applied Non-Classical Logics, 21(1):9-34.
308
+ Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. 2024. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787.
309
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, and 1 others. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
310
+
311
+ Daniel C. Dennett. 1978. Beliefs about beliefs. Behavioral and Brain Sciences, 1:568.
312
+ Zhuang Chen, Jincenzi Wu, Jinfeng Zhou, Bosi Wen, Guanqun Bi, Gongyao Jiang, Yaru Cao, Mengting Hu, Yunghwei Lai, Zexuan Xiong, and Minlie Huang. 2024. ToMBench: Benchmarking theory of mind in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15959-15983.
313
+ Daixuan Cheng, Shaohan Huang, Xuekai Zhu, Bo Dai, Wayne Xin Zhao, Zhenliang Zhang, and Furu Wei. 2025. Reasoning with exploration: An entropy perspective. arXiv preprint arXiv:2506.14758.
314
+ Gottlob Frege. 1879. Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought. From Frege to Gödel: A Source Book in Mathematical Logic, 1931:1-82.
315
+ Alvin I. Goldman. 1979. What is justified belief? Justification and Knowledge, 17:1-23.
316
+ Aaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
317
+ Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Peiyi Wang, Qihao Zhu, Runxin Xu, Ruoyu Zhang, Shirong Ma, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, and 175 others. 2025a. Deepseek-r1 incentivizes reasoning in llms through reinforcement learning. Nature, 645(8081):633-638.
318
+ Jiaxin Guo, Zewen Chi, Li Dong, Qingxiu Dong, Xun Wu, Shaohan Huang, and Furu Wei. 2025b. Reward reasoning model. arXiv preprint arXiv:2505.14674.
319
+ Lasse Dissing Hansen and Thomas Bolander. 2020. Implementing theory of mind on a robot using dynamic epistemic logic. In Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 1615-1621. International Joint Conference on Artificial Intelligence Organization.
320
+ Jaakko Hintikka. 1962. Knowledge and belief: An introduction to the logic of the two notions. Studia Logica, 16:119-122.
321
+ Jaakko Hintikka and Merrill B. P. Hintikka. 1989. The Logic of Epistemology and the Epistemology of Logic: Selected Essays. Springer Verlag, Dordrecht, Netherlands.
322
+ Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276.
323
+
324
+ Donald E Knuth. 1964. Backus normal form vs. backus naur form. Communications of the ACM, 7(12):735-736.
325
+ Michal Kosinski. 2024. Evaluating large language models in theory of mind tasks. Proceedings of the National Academy of Sciences, 121(45):e2405460121.
326
+ Saul Kripke. 1963. Semantical considerations on modal logic. Acta Philosophica Fennica, 16:83-94.
327
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th symposium on operating systems principles, pages 611-626.
328
+ Dacheng Li, Shiyi Cao, Tyler Griggs, Shu Liu, Xiangxi Mo, Eric Tang, Sumanth Hegde, Kourosh Hakhamaneshi, Shishir G Patil, Matei Zaharia, Joseph E Gonzalez, and Ion Stoica. 2025. Llms can easily learn to reason from demonstrations structure, not content, is what matters! arXiv preprint arXiv:2502.07374.
329
+ Qingwen Lin, Boyan Xu, Zijian Li, Zhifeng Hao, Keli Zhang, and Ruichu Cai. 2025. Leveraging constrained monte carlo tree search to generate reliable long chain-of-thought for mathematical reasoning. arXiv preprint arXiv:2502.11169.
330
+ Zizheng Lin, Chunkit Chan, Yangqiu Song, and Xin Liu. 2024. Constrained reasoning chains for enhancing theory-of-mind in large language models. In Pacific Rim International Conference on Artificial Intelligence, pages 354-360. Springer.
331
+ Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, and 1 others. 2024. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437.
332
+ Yi-Long Lu, Chunhui Zhang, Jiajun Song, Lifeng Fan, and Wei Wang. 2025. Do theory of mind benchmarks need explicit human-like reasoning in language models? arXiv preprint arXiv:2504.01698.
333
+ Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. 2025. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393.
334
+ Jan Plaza. 2007. Logics of public communications. Synthese, 158:165-179.
335
+ David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and brain sciences, 1(4):515-526.
336
+ Neil Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, SM Ali Eslami, and Matthew Botvinick. 2018. Machine theory of mind. In Proceedings of the Thirty-Fifth International Conference on Machine Learning, pages 4218-4227.
339
+ Bertrand Russell and Alfred North Whitehead. 1910. Principia Mathematica Vol. I. Cambridge University Press.
340
+ Jon Saad-Falcon, Buchanan E Kelly, Mayee F Chen, Tzu-Heng Huang, Brendan McLaughlin, Tanvir Bhathal, Shang Zhu, Ben Athiwaratkun, Frederic Sala, Scott Linderman, Azalia Mirhoseini, and Christopher Re. 2025. Shrinking the generation-verification gap with weak verifiers. arXiv preprint arXiv:2506.18203.
341
+ Rylan Schaeffer, Joshua Kazdan, John Hughes, Jordan Juravsky, Sara Price, Aengus Lynch, Erik Jones, Robert Kirk, Azalia Mirhoseini, and Sanmi Koyejo. 2025. How do large language monkeys get their power (laws)? In Proceedings of the Forty-second International Conference on Machine Learning.
342
+ Melanie Sclar, Jane Dwivedi-Yu, Maryam Fazel-Zarandi, Yulia Tsvetkov, Yonatan Bisk, Yejin Choi, and Asli Celikyilmaz. 2025. Explore theory of mind: program-guided adversarial data generation for the theory of mind reasoning. In The Thirteenth International Conference on Learning Representations.
343
+ Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, and 1 others. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300.
344
+ Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. 2025. Scaling llm test-time compute optimally can be more effective than scaling parameters for reasoning. In The Thirteenth International Conference on Learning Representations.
345
+ James WA Strachan, Dalila Albergo, Giulia Borghini, Oriana Pansardi, Eugenio Scaliti, Saurabh Gupta, Krati Saxena, Alessandro Rufo, Stefano Panzeri, Guido Manzi, and 1 others. 2024. Testing theory of mind in large language models and humans. Nature Human Behaviour, 8(7):1285-1295.
346
+ Winnie Street, John Oliver Siy, Geoff Keeling, Adrien Baranes, Benjamin Barnett, Michael McKibben, Tatenda Kanyere, Alison Lentz, Robin IM Dunbar, and 1 others. 2024. Llms achieve adult human performance on higher-order theory of mind tasks. arXiv preprint arXiv:2405.18870.
347
+ Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. 2024. Easy-to-hard generalization: Scalable alignment beyond human supervision. Advances in Neural Information Processing Systems, 37:51118-51168.
348
+ Alfred Tarski. 1956. The concept of truth in formalized languages. In Logic, semantics, metamathematics, pages 152-278. Clarendon Press.
349
+
350
+ Tomer Ullman. 2023. Large language models fail on trivial alterations to theory-of-mind tasks. arXiv preprint arXiv:2302.08399.
351
+ Johan Van Benthem. 2001. Games in dynamic-epistemic logic. Bulletin of Economic Research, 53(4):219-248.
352
+ Hans Van Ditmarsch, Wiebe van Der Hoek, and Barteld Kooi. 2007. Dynamic epistemic logic, volume 337. Springer Science & Business Media.
353
+ Evan Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, and 1 others. 2025. 2 olmo 2 furious (colm's version). In Second Conference on Language Modeling.
354
+ Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. 2024. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426-9439.
355
+ Weiqin Wang, Yile Wang, and Hui Huang. 2025. Ranked voting based self-consistency of large language models. arXiv preprint arXiv:2505.10772.
356
+ Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.
357
+ Ludwig Wittgenstein. 1922. Tractatus logicophilosophicus. Filosoficky Casopis, 52:336-341.
358
+ Yufan Wu, Yinghui He, Yilin Jia, Rada Mihalcea, Yulong Chen, and Naihao Deng. 2023. Hi-ToM: A benchmark for evaluating higher-order theory of mind reasoning in large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10691-10706, Singapore. Association for Computational Linguistics.
359
+ Yuheng Wu, Wentao Guo, Zirui Liu, Heng Ji, Zhaozhuo Xu, and Denghui Zhang. 2025. How large language models encode theory-of-mind: a study on sparse parameter patterns. npj Artificial Intelligence, 1(1):20.
360
+ An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, and 41 others. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388.
361
+ Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. 2024. Rest-mcts*: Llm self-training via process reward guided tree search. In Proceedings of the 38th International Conference on Neural Information Processing Systems, pages 64735-64772.
362
+
363
+ # Appendix
364
+
365
+ # A Prompt Templates
366
+
367
+ The full prompt templates are shown in Figures 5, 6, and 7.
368
+
369
+ # B What Makes PBM Different from Majority Voting?
370
+
371
+ Here, we provide an analysis comparing PBM with majority voting.
372
+
373
+ Problem Setup and Notation. Given an input $x$, the language model $\pi_{\theta}$ must perform $K$ sequential belief-updating steps, generating a trajectory
374
+
375
+ $$
376
+ (z_1, z_2, \dots, z_K), \quad z_i \sim \pi_{\theta}(\cdot \mid x, z_{<i}).
377
+ $$
378
+
379
+ After the trajectory is complete, it outputs a final answer $y$, chosen from a set of $L$ candidates (typically $L \approx 5$-$6$ in Hi-ToM). We assume:
380
+
381
+ - Each step is independently correct with probability $q$.
382
+ - A trajectory is Good if all $K$ steps are correct: $\Pr[\text{Good}] = q^K$.
383
+ - Otherwise, it is Bad: $\Pr[\text{Bad}] = 1 - q^K$.
384
+
385
+ Majority Voting. We sample $N$ i.i.d. trajectories and return the most frequent final answer. Let:
386
+
387
+ - $G \sim \operatorname{Binomial}(N, q^K)$: the number of Good trajectories.
388
+ - $R = N - G$: the number of Bad trajectories.
389
+
390
+ Under the uniform scattering assumption, Bad votes are evenly spread over $L - 1$ wrong answers:
391
+
392
+ $$
393
+ B_j \mid R \sim \operatorname{Binomial}\left(R, \frac{1}{L-1}\right), \qquad j = 1, \dots, L-1.
394
+ $$
395
399
+
400
+ Majority voting succeeds iff
401
+
402
+ $$
403
+ E_{\mathrm{maj}} = \left\{ G \geq 1 \ \text{and} \ G > \max_{j} B_{j} \right\}.
404
+ $$
405
+
406
+ PBM Reranking. We sample $N$ trajectories and assign each a stepwise score:
407
+
408
+ $$
409
+ s(z) = \frac{1}{K} \sum_{k=1}^{K} \mathbf{1}\{ z_k \ \text{correct} \}.
410
+ $$
411
+
412
+ Good trajectories receive score 1, and Bad trajectories score at most $1 - \frac{1}{K}$. We select the trajectory with the highest score; hence, assuming the PBM's step-level verdicts are accurate, PBM succeeds iff
413
+
414
+ $$
415
+ E_{\mathrm{pbm}} = \{ G \geq 1 \}.
416
+ $$
417
+
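+ For concreteness, here is a minimal Python sketch of this selection rule (our illustration, not code from the paper; the `step_correct` field is a hypothetical stand-in for the per-step verdicts the PBM scorer would emit):
+
+ ```python
+ from typing import Dict, List
+
+ def stepwise_score(step_correct: List[bool]) -> float:
+     # s(z) = (1/K) * (number of correct belief-updating steps).
+     return sum(step_correct) / len(step_correct)
+
+ def pbm_select(trajectories: List[Dict]) -> Dict:
+     # A Good trajectory (score 1) always beats a Bad one (score <= 1 - 1/K),
+     # so selection succeeds whenever at least one Good trajectory is sampled.
+     return max(trajectories, key=lambda t: stepwise_score(t["step_correct"]))
+ ```
+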
418
+ PBM Success Rate. By independence:
419
+
420
+ $$
421
+ A_{\mathrm{pbm}} = \Pr(E_{\mathrm{pbm}}) = 1 - (1 - q^K)^N.
422
+ $$
423
+
424
+ Majority Voting Success Rate. We have
425
+
426
+ $$
427
+ \begin{aligned} A_{\mathrm{maj}} = \Pr(E_{\mathrm{maj}}) &\leq \Pr(G \geq 1) \\ &= 1 - (1 - q^K)^N = A_{\mathrm{pbm}}. \end{aligned}
428
+ $$
429
+
430
+ Hence, under this model, PBM never underperforms majority voting.
+
+ Remarks. For majority voting, divide both sides of $G > \max_{j} B_{j}$ by $N$ and take the limit as $N \to \infty$. By the law of large numbers:
431
+
432
+ $$
433
+ \frac{G}{N} \rightarrow q^K, \qquad \frac{\max_{j} B_{j}}{N} \rightarrow \frac{1 - q^K}{L - 1}.
434
+ $$
435
+
436
+ A necessary condition for success is therefore:
437
+
438
+ $$
439
+ q^K > \frac{1 - q^K}{L - 1} \quad \Leftrightarrow \quad q^K > \frac{1}{L}.
440
+ $$
441
+
442
+ If $q^{K} \leq \frac{1}{L}$ (typical for small models or hard ToM questions), then
443
+
444
+ $$
445
+ \lim_{N\to \infty}A_{\mathrm{maj}} = 0.
446
+ $$
447
+
448
+ In conclusion, majority voting is vulnerable to vote dilution: if $q^{K} \leq \frac{1}{L}$, the votes that Bad trajectories scatter across wrong answers outnumber the Good votes in expectation, so a wrong answer wins as $N$ grows. For instance, with $q = 0.85$, $K = 15$, and $L = 6$, we get $q^K \approx 0.087 < 1/L \approx 0.167$.
449
+
450
+ This analysis explains why PBM offers more reliable inference than majority voting, especially in complex ToM settings.
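+
+ As a sanity check on the closed-form comparison above, the following minimal Monte Carlo sketch (an illustration under the same idealized independence and uniform-scattering assumptions; all parameter values are ours) estimates both success rates:
+
+ ```python
+ import random
+ from collections import Counter
+
+ def simulate(q: float, K: int, L: int, N: int, trials: int = 10_000):
+     """Estimate (A_maj, A_pbm): each of K steps is correct w.p. q; Good
+     trajectories vote for the true answer (index 0); Bad trajectories
+     scatter uniformly over the L - 1 wrong answers."""
+     maj_wins = pbm_wins = 0
+     for _ in range(trials):
+         votes = [0 if all(random.random() < q for _ in range(K))
+                  else random.randrange(1, L) for _ in range(N)]
+         counts = Counter(votes)
+         g = counts.get(0, 0)
+         max_bad = max((c for a, c in counts.items() if a != 0), default=0)
+         maj_wins += (g >= 1 and g > max_bad)  # event E_maj
+         pbm_wins += (g >= 1)                  # event E_pbm
+     return maj_wins / trials, pbm_wins / trials
+
+ # Vote-dilution regime: q^K ≈ 0.087 < 1/L ≈ 0.167, so A_maj collapses toward 0
+ # while A_pbm stays near the closed form 1 - (1 - q^K)^N ≈ 0.95.
+ print(simulate(q=0.85, K=15, L=6, N=32))
+ ```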
451
+
452
+ # One-Shot Prompt - Part 1
453
+
454
+ Here is a story that unfolds in chronological order.
455
+
456
+ You will be asked a question about the story, which may involve either:
457
+
458
+ (1) Locating an object, or
459
+ (2) Inferring an agent's mental state (e.g., what A thinks B thinks C thinks).
460
+
461
+ To solve it, think step-by-step. At each step, repeat the current line from the story, then explain its effect on beliefs. Use [Null] if someone does not yet have knowledge. If a belief chain cannot be formed (e.g., some agent exited too early), freeze belief at the last available step.
462
+
463
+ <Note> {note}
464
+
465
+ In public or private communication:
466
+
467
+ - The speaker believes the listener will believe the claim.
468
+
469
+ - If the listener exited the room earlier than the speaker, they will believe it.
470
+
471
+ If the question is zero-order (e.g., "Where is X really?"), then in each step, only track the actual location of the object (e.g., "X is in [Y]"). You do not need to track nested beliefs.
472
+
473
+ Here is an example:
474
+
475
+ <Story>
476
+
477
+ 1 Amelia, Chloe, Liam, Owen and Benjamin entered the TV_room.
478
+ 2 The celery is in the red_envelope.
479
+ 3 Amelia made no movements and stayed in the TV_room for 1 minute.
480
+ 4 Chloe lost his watch.
481
+ 5 Amelia exited the TV_room.
482
+ 6 Chloe moved the celery to the green_bucket.
483
+ 7 Chloe exited the TV_room.
484
+ 8 Liam moved the celery to the red_bathtub.
485
+ 9 Liam exited the TV_room.
486
+ 10 Owen made no movements and stayed in the TV_room for 1 minute.
487
+ 11 Owen exited the TV_room.
488
+ 12 Benjamin made no movements and stayed in the TV_room for 1 minute.
489
+ 13 Benjamin exited the TV_room.
490
+ 14 Amelia, Chloe, Liam, Owen and Benjamin entered the waiting_room.
491
+ 15 Liam publicly claimed that celery is in the white_bathtub now.
492
+ 16 Benjamin privately told Liam that the celery is in the blue_DRAW now.
493
+
494
+ <Question>
495
+
496
+ Where does Owen think Liam thinks Chloe thinks the celery is?
497
+
498
+ <Trace>
499
+
500
+ Step 1
501
+
502
+ Amelia, Chloe, Liam, Owen and Benjamin entered the TV_room.
503
+
504
+ Everyone is present, but the celery's location is still unknown.
505
+
506
+ Owen thinks Liam thinks Chloe thinks the celery is in [Null]
507
+
508
+ Step 2
509
+
510
+ The celery is in the red_envelope.
511
+
512
+ Everyone observes this.
513
+
514
+ Owen thinks Liam thinks Chloe thinks the celery is in [red_envelope]
515
+
516
+ Step 3
517
+
518
+ Amelia made no movements and stayed in the TV_room for 1 minute.
519
+
520
+ No effect.
521
+
522
+ Owen thinks Liam thinks Chloe thinks the celery is in [red_envelope]
523
+
524
+ Figure 5: One-Shot Prompt - Part 1.
525
+
526
+ # One-Shot Prompt - Part 2
527
+
528
+ Step 4
529
+
530
+ Chloe lost his watch.
531
+
532
+ Irrelevant.
533
+
534
+ Owen thinks Liam thinks Chloe thinks the celery is in [red_envelope]
535
+
536
+ Step 5
537
+
538
+ Amelia exited the TV_room.
539
+
540
+ Irrelevant.
541
+
542
+ Owen thinks Liam thinks Chloe thinks the celery is in [red_envelope]
543
+
544
+ Step 6
545
+
546
+ Chloe moved the celery to the green_bucket.
547
+
548
+ Only Chloe, Liam, Owen, Benjamin are present. They all see this move.
549
+
550
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
551
+
552
+ Step 7
553
+
554
+ Chloe exited the TV_room.
555
+
556
+ Chloe's belief frozen; still [green_bucket]
557
+
558
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
559
+
560
+ Step 8
561
+
562
+ Liam moved the celery to the red_bathtub.
563
+
564
+ Only Liam, Owen, Benjamin present. They observe the move. Chloe not present, so her belief unchanged.
565
+
566
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
567
+
568
+ Step 9
569
+
570
+ Liam exited the TV_room.
571
+
572
+ No change.
573
+
574
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
575
+
576
+ Step 10
577
+
578
+ Owen made no movements and stayed in the TV_room for 1 minute.
579
+
580
+ Irrelevant.
581
+
582
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
583
+
584
+ Step 11
585
+
586
+ Owen exited the TV_room.
587
+
588
+ Owen's belief frozen.
589
+
590
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
591
+
592
+ Step 12
593
+
594
+ Benjamin made no movements and stayed in the TV_room for 1 minute.
595
+
596
+ Irrelevant.
597
+
598
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
599
+
600
+ Step 13
601
+
602
+ Benjamin exited the TV_room.
603
+
604
+ No change.
605
+
606
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
607
+
608
+ Step 14
609
+
610
+ Everyone entered the waiting_room.
611
+
612
+ No effect on beliefs.
613
+
614
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
615
+
616
+ Figure 6: One-Shot Prompt - Part 2.
617
+
618
+ # One-Shot Prompt - Part 3
619
+
620
+ # Step 15
621
+
622
+ ```txt
623
+ Liam publicly claimed that celery is in the white_bathtub now.
+ Owen hears this statement. However, public speech only affects first- and second-order beliefs (e.g., what Liam believes, what Owen thinks Liam believes, and what Liam thinks Owen believes). It does not change Owen's belief about what Liam thinks Chloe thinks.
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
624
+ ```
625
+
626
+ # Step 16
627
+
628
+ ```txt
629
+ Benjamin privately told Liam that the celery is in the blue_DRAW now.
+ Owen does not hear this, but more importantly, private communication only affects beliefs between the speaker and the listener. It can change what Liam believes (based on exit order), or what Liam thinks Benjamin believes (based on exit order), or what Benjamin thinks Liam believes (always changes) - but it cannot affect higher-order beliefs. So this does not change Owen's belief about what Liam thinks Chloe thinks.
630
+ ```
631
+
632
+ ```txt
633
+ Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]
634
+ ```
635
+
636
+ ```txt
637
+ Final Answer: [green_bucket]
638
+ Now it's your turn.
639
+ <Story> {story}
640
+ <Question> {question}
641
+ Give a step-by-step trace as in the example. Then, give the final answer in one line like: Final Answer: [your choice]
642
+ <trace>
643
+ ```
644
+
645
+ Figure 7: One-Shot Prompt - Part 3.
2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9f6b9da87110dab03a03e0ef34010fa7ae84359c553ca7acf2445e3c18838cc6
3
+ size 565560
2025/DEL-ToM_ Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DELOC_ Document Element Localizer/b1ff4230-9685-4b29-9a1f-525a8461a4fb_content_list.json ADDED
@@ -0,0 +1,1804 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "DELOC: Document Element Localizer",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 295,
8
+ 90,
9
+ 702,
10
+ 109
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Hammad Ayyubi $^{1*}$ , Puneet Mathur $^{2}$ , Md Mehrab Tanjim $^{2}$ , Vlad I Morariu $^{2}$",
17
+ "text_level": 1,
18
+ "bbox": [
19
+ 151,
20
+ 142,
21
+ 850,
22
+ 161
23
+ ],
24
+ "page_idx": 0
25
+ },
26
+ {
27
+ "type": "text",
28
+ "text": "$^{1}$ Columbia University, $^{2}$ Adobe Research,",
29
+ "bbox": [
30
+ 324,
31
+ 162,
32
+ 673,
33
+ 181
34
+ ],
35
+ "page_idx": 0
36
+ },
37
+ {
38
+ "type": "text",
39
+ "text": "Correspondence: hayyubi@cs.columbia.edu",
40
+ "bbox": [
41
+ 357,
42
+ 183,
43
+ 638,
44
+ 197
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "Abstract",
51
+ "text_level": 1,
52
+ "bbox": [
53
+ 260,
54
+ 260,
55
+ 341,
56
+ 275
57
+ ],
58
+ "page_idx": 0
59
+ },
60
+ {
61
+ "type": "text",
62
+ "text": "Editing documents and PDFs using natural language instructions is desirable for many reasons - ease of use, increasing accessibility to non-technical users, and for creativity. To do this automatically, a system needs to first understand the user's intent and convert this to an executable plan or command, and then the system needs to identify or localize the elements that the user desires to edit. While there exist methods that can accomplish these tasks, a major bottleneck in these systems is the inability to ground the spatial edit location effectively. We address this gap through our proposed system, DELOC (Document Element LOCALizer). DELOC adapts the grounding capabilities of existing Multimodal Large Language Model (MLLM) from natural images to PDFs. This adaptation involves two novel contributions: 1) synthetically generating PDF-grounding instruction tuning data from partially annotated datasets; and 2) synthetic data cleaning via Code-NLI, an NLI-inspired process to clean data using generated Python code. The effectiveness of DELOC is apparent in the $>2\\mathrm{x}$ zero-shot improvement it achieves over the next best MLLM, GPT-4o.",
63
+ "bbox": [
64
+ 141,
65
+ 288,
66
+ 460,
67
+ 657
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "1 Introduction",
74
+ "text_level": 1,
75
+ "bbox": [
76
+ 114,
77
+ 670,
78
+ 260,
79
+ 684
80
+ ],
81
+ "page_idx": 0
82
+ },
83
+ {
84
+ "type": "text",
85
+ "text": "Editing documents and PDFs via natural language (Mathur et al., 2023; Suri et al., 2024) is an innovative and user-friendly advancement. This technology makes it easier for non-technical people to edit PDFs, speeds up the editing process, and facilitates document editing on a mobile device.",
86
+ "bbox": [
87
+ 112,
88
+ 695,
89
+ 487,
90
+ 790
91
+ ],
92
+ "page_idx": 0
93
+ },
94
+ {
95
+ "type": "text",
96
+ "text": "Typically, a two-stage process has been followed to edit PDFs from natural language requests. In the first stage, a model is required to predict the edit location (green highlight in Figure 1) spatially in the PDF. The edit location is a PDF element (paragraph, line, list, table, etc.). The second stage",
97
+ "bbox": [
98
+ 112,
99
+ 791,
100
+ 487,
101
+ 888
102
+ ],
103
+ "page_idx": 0
104
+ },
105
+ {
106
+ "type": "image",
107
+ "img_path": "images/3471bc165d1c26d7258906fbdfb45df050adfad0e5090df4880608a459c0b1e1.jpg",
108
+ "image_caption": [
109
+ "Figure 1: Illustration of a typical PDF edit process. Models predict Edit Location and Edit Command, which are combined to effect the required edit. This work focuses on improving the bottleneck step - Edit Location. To this end, the proposed system, DELOC, generates large-scale (3M) synthetic data to instruction tune a Multimodal LLM."
110
+ ],
111
+ "image_footnote": [],
112
+ "bbox": [
113
+ 515,
114
+ 260,
115
+ 880,
116
+ 449
117
+ ],
118
+ "page_idx": 0
119
+ },
120
+ {
121
+ "type": "text",
122
+ "text": "involves predicting the edit command (replace in Figure 1). The predicted bounding box and edit command can then be simply combined to effect the desired change. Existing PDF editing systems achieve high accuracy $(>85\\%)$ in predicting edit commands (Mathur et al., 2023) but struggle with bounding box precision $(<50\\%)$ (Suri et al., 2024). As such, we focus on improving the spatial PDF element localization for a given edit request.",
123
+ "bbox": [
124
+ 507,
125
+ 574,
126
+ 882,
127
+ 719
128
+ ],
129
+ "page_idx": 0
130
+ },
131
+ {
132
+ "type": "text",
133
+ "text": "State-of-the-art systems ground user queries in images (Wang et al., 2023; You et al., 2023), mobile UIs (You et al., 2024), and web UIs (Hong et al., 2023), but they do not transfer directly to PDFs. PDFs are more verbose, and edit requests follow a hierarchical structure requiring domain knowledge. For instance, a request like \"first list item of the third list in Section A.1\" demands an understanding of the composition: section $\\rightarrow$ list $\\rightarrow$ list item.",
134
+ "bbox": [
135
+ 507,
136
+ 724,
137
+ 882,
138
+ 868
139
+ ],
140
+ "page_idx": 0
141
+ },
142
+ {
143
+ "type": "text",
144
+ "text": "To address these challenges, we propose DELOC (Document Element LOCALizer), which adapts the strong grounding capabilities of Mul",
145
+ "bbox": [
146
+ 507,
147
+ 873,
148
+ 882,
149
+ 921
150
+ ],
151
+ "page_idx": 0
152
+ },
153
+ {
154
+ "type": "page_footnote",
155
+ "text": "*Work done during an internship at Adobe Research, Document Intelligence Lab (DIL)",
156
+ "bbox": [
157
+ 112,
158
+ 894,
159
+ 487,
160
+ 921
161
+ ],
162
+ "page_idx": 0
163
+ },
164
+ {
165
+ "type": "page_number",
166
+ "text": "31127",
167
+ "bbox": [
168
+ 473,
169
+ 927,
170
+ 524,
171
+ 940
172
+ ],
173
+ "page_idx": 0
174
+ },
175
+ {
176
+ "type": "footer",
177
+ "text": "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 31127-31136 November 4-9, 2025 ©2025 Association for Computational Linguistics",
178
+ "bbox": [
179
+ 152,
180
+ 945,
181
+ 843,
182
+ 972
183
+ ],
184
+ "page_idx": 0
185
+ },
186
+ {
187
+ "type": "image",
188
+ "img_path": "images/877ec0cf624bc8fc7f88120d472ecff5c21fda422b6ca29317517bcb9a1faebb.jpg",
189
+ "image_caption": [
190
+ "Figure 2: Overview of DELOC. Document elements, their bounding box, and their hierarchical relations are first detected (DET) and formatted into a JSON structure (FJ). This JSON is fed to an LLM to generate synthetic data."
191
+ ],
192
+ "image_footnote": [],
193
+ "bbox": [
194
+ 139,
195
+ 83,
196
+ 860,
197
+ 259
198
+ ],
199
+ "page_idx": 1
200
+ },
201
+ {
202
+ "type": "text",
203
+ "text": "timodal Large Language Models (MLLMs) from natural images to PDFs. This adaptation involves instruction-tuning an MLLM specifically for PDFs. Our key contribution is the synthetic generation of high-quality instruction-tuning data, followed by an automated cleaning process called Code-NLI.",
204
+ "bbox": [
205
+ 112,
206
+ 310,
207
+ 487,
208
+ 406
209
+ ],
210
+ "page_idx": 1
211
+ },
212
+ {
213
+ "type": "text",
214
+ "text": "We first represent PDF elements (paragraphs, lines, lists, tables, titles, sections) in a spatially aware hierarchical JSON format (Figure 2), capturing each element's bounding box and hierarchical relationships (e.g., paragraph $\\rightarrow$ lines, list $\\rightarrow$ list items). This information is sourced from partially annotated PDF Document Analysis datasets (e.g., PubLayNet (Zhong et al., 2019)), using existing annotations and heuristically generating missing ones. Next, we input this structured PDF representation into a Large Language Model (LLM) to generate synthetic user requests for PDF editing, along with system responses specifying edit locations via bounding boxes. The hierarchical structure enables the LLM to produce compositional requests that resemble real user queries, while the bounding box data ensures precise edit location predictions.",
215
+ "bbox": [
216
+ 112,
217
+ 412,
218
+ 489,
219
+ 684
220
+ ],
221
+ "page_idx": 1
222
+ },
223
+ {
224
+ "type": "text",
225
+ "text": "We further clean noisy generated samples using a proposed process called Code-NLI (Code-based Natural Language Inference). Code-NLI (Figure 3) treats the generated user edit request and system response as the hypothesis, and the PDF as the premise. To verify the hypothesis, it leverages an LLM to generate Python code, which is then executed to filter out noisy samples.",
226
+ "bbox": [
227
+ 112,
228
+ 690,
229
+ 487,
230
+ 819
231
+ ],
232
+ "page_idx": 1
233
+ },
234
+ {
235
+ "type": "text",
236
+ "text": "To demonstrate the effectiveness of DELOC, we evaluate it on DocEdit (Mathur et al., 2023). Our results demonstrate that DELOC outperforms all existing zero-shot models on PDFs, including proprietary MLLM GPT-4o by $>2\\mathrm{x}$ . Our ablations demonstrate DELOC's performance improves with",
237
+ "bbox": [
238
+ 112,
239
+ 825,
240
+ 489,
241
+ 921
242
+ ],
243
+ "page_idx": 1
244
+ },
245
+ {
246
+ "type": "text",
247
+ "text": "data scale and data cleaning via Code-NLI is crucial to its performance.",
248
+ "bbox": [
249
+ 507,
250
+ 310,
251
+ 884,
252
+ 342
253
+ ],
254
+ "page_idx": 1
255
+ },
256
+ {
257
+ "type": "text",
258
+ "text": "2 Related Work",
259
+ "text_level": 1,
260
+ "bbox": [
261
+ 507,
262
+ 355,
263
+ 665,
264
+ 370
265
+ ],
266
+ "page_idx": 1
267
+ },
268
+ {
269
+ "type": "text",
270
+ "text": "There has been a surge of interest in natural language based PDF edit localization. DocEdit (Mathur et al., 2023) trains a multimodal transformer that directly regresses the edit bounding box given the document as input. DocEdit-v2 (Suri et al., 2024) trains an encoder-decoder architecture that outputs a segmentation mask for the edit location. In training a model from scratch, these methods fail to utilize the excellent grounding capabilities of recent MLLMs.",
271
+ "bbox": [
272
+ 505,
273
+ 382,
274
+ 884,
275
+ 542
276
+ ],
277
+ "page_idx": 1
278
+ },
279
+ {
280
+ "type": "text",
281
+ "text": "A number of MLLMs (Chen et al., 2023; Zhang et al., 2023; Yuan et al., 2023; Lv et al., 2023; Lai et al., 2024; Ma et al., 2024) have been proposed that have shown impressive grounding abilities on natural images. Works like Ferret-UI (You et al., 2024), Cog-Agent (Hong et al., 2023), and MultiUI (Liu et al., 2024) have successfully adapted these MLLMs for mobile screens and web pages. We take inspiration from these works to adapt MLLMs for PDFs using large-scale synthetic data. In comparison, Agent-DocEdit (Wu et al., 2024) finetunes a grounding model, GLIP (Li* et al., 2022), only on a small training set.",
282
+ "bbox": [
283
+ 507,
284
+ 543,
285
+ 882,
286
+ 752
287
+ ],
288
+ "page_idx": 1
289
+ },
290
+ {
291
+ "type": "text",
292
+ "text": "3 Method",
293
+ "text_level": 1,
294
+ "bbox": [
295
+ 507,
296
+ 766,
297
+ 611,
298
+ 780
299
+ ],
300
+ "page_idx": 1
301
+ },
302
+ {
303
+ "type": "text",
304
+ "text": "Our method adapts the excellent query understanding and grounding capabilities of existing Multimodal Large Language Models (MLLMs) for grounding edit queries in PDFs. To this end, we: 1) generate diverse, compositional, and clean instruction-tuning data; and, 2) instruction-tune a MLLM on the generated data. We describe the synthetic data generation process below.",
305
+ "bbox": [
306
+ 507,
307
+ 791,
308
+ 884,
309
+ 921
310
+ ],
311
+ "page_idx": 1
312
+ },
313
+ {
314
+ "type": "page_number",
315
+ "text": "31128",
316
+ "bbox": [
317
+ 475,
318
+ 927,
319
+ 524,
320
+ 940
321
+ ],
322
+ "page_idx": 1
323
+ },
324
+ {
325
+ "type": "image",
326
+ "img_path": "images/ace1861ad173926c3c69e306ef5ebfeb2ba98dfbbb5ca3142b37385db11a6cf0.jpg",
327
+ "image_caption": [
328
+ "Figure 3: Code-NLI based data cleaning. Generated data is treated as hypothesis, which is validated by LLM generated Python code, given PDF premise."
329
+ ],
330
+ "image_footnote": [],
331
+ "bbox": [
332
+ 119,
333
+ 87,
334
+ 305,
335
+ 250
336
+ ],
337
+ "page_idx": 2
338
+ },
339
+ {
340
+ "type": "image",
341
+ "img_path": "images/be6a08206f968efc123b8bfa78b055b3bcd2634e1051cb93482a7cd6ed97be57.jpg",
342
+ "image_caption": [
343
+ "Generated Code"
344
+ ],
345
+ "image_footnote": [],
346
+ "bbox": [
347
+ 310,
348
+ 87,
349
+ 468,
350
+ 239
351
+ ],
352
+ "page_idx": 2
353
+ },
354
+ {
355
+ "type": "text",
356
+ "text": "Given a lack of large-scale instruction-tuning data for grounding edit queries in PDFs, we build an automatic synthetic data generation process. We discuss below the two stages of this process:",
357
+ "bbox": [
358
+ 112,
359
+ 319,
360
+ 485,
361
+ 382
362
+ ],
363
+ "page_idx": 2
364
+ },
365
+ {
366
+ "type": "text",
367
+ "text": "LLM powered data generation. We present the PDF to an LLM and prompt it to generate edit requests along with the corresponding bounding box (see Figure 2). Prompt details in Appendix B. To ensure that the generated edit requests capture the compositional nature of real user queries, we structure PDFs in a hierarchical JSON format that explicitly defines the parent-child relationships of each element. Each element contains information about its type (e.g., paragraph, line, list), its bounding box within the PDF, and its subelements (e.g., a line is a subelement of a paragraph). Additionally, we organize elements sequentially within the same hierarchy based on their y-coordinates, enabling the LLM to generate edit requests that reference these elements numerically (e.g., \"second list,\" \"last line\"). Moreover, the inclusion of a bounding box for each element allows the LLM to generate precise edit location in the system response for the corresponding edit query.",
368
+ "bbox": [
369
+ 112,
370
+ 387,
371
+ 487,
372
+ 708
373
+ ],
374
+ "page_idx": 2
375
+ },
376
+ {
377
+ "type": "text",
378
+ "text": "To obtain this hierarchical PDF representation, we require annotations for each PDF element, including its type, bounding box, and hierarchical subelements. However, there does not exist a single dataset that contains all this information for all element types. As such, we use different datasets - PubLayNet and DocLayNet (Pfitzmann et al., 2022) - each containing a subset of these elements. Table 1 shows a non-exhaustive list of these elements and the corresponding dataset containing them. If an element's hierarchical subelements' annotations do not exist natively in the dataset, we use heuristics to create them. These heuristics are rela",
379
+ "bbox": [
380
+ 110,
381
+ 712,
382
+ 489,
383
+ 919
384
+ ],
385
+ "page_idx": 2
386
+ },
387
+ {
388
+ "type": "table",
389
+ "img_path": "images/b6e7adcf3577c9029b3bd601ae99b0fd15831e89da74663a6c72aabc9558d28d.jpg",
390
+ "table_caption": [],
391
+ "table_footnote": [],
392
+ "table_body": "<table><tr><td></td><td>Title</td><td colspan=\"2\">Paragraph</td><td colspan=\"3\">List</td><td>Tables</td><td>Figures</td></tr><tr><td>Datasets</td><td></td><td></td><td>Lines</td><td></td><td colspan=\"2\">List-items</td><td></td><td></td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td>Lines</td><td></td><td></td></tr><tr><td>PubLayNet</td><td>✓</td><td>✓</td><td>♠</td><td>✓</td><td>♠</td><td>♠</td><td>✓</td><td>✓</td></tr><tr><td>DocLayNet</td><td>✓</td><td>✓</td><td>✓</td><td>♠</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>",
393
+ "bbox": [
394
+ 512,
395
+ 82,
396
+ 878,
397
+ 155
398
+ ],
399
+ "page_idx": 2
400
+ },
401
+ {
402
+ "type": "text",
403
+ "text": "Table 1: Datasets and their PDF elements - either natively (✓) present or heuristically created (♠). Multiple datasets combine to give extensive elements coverage.",
404
+ "bbox": [
405
+ 507,
406
+ 159,
407
+ 882,
408
+ 202
409
+ ],
410
+ "page_idx": 2
411
+ },
412
+ {
413
+ "type": "text",
414
+ "text": "tively simple - comparing y-coordinates of words to create lines within paragraphs, comparing x-coordinates of lines to create list-items within lists, and so on. The granularity scope goes as low as words. This means we have annotations for words, equations, formulae, footnotes, page numbers, etc. Essentially, we tried to cover as many PDF elements as possible with the available datasets and heuristics. More details in Appendix A.",
415
+ "bbox": [
416
+ 507,
417
+ 218,
418
+ 882,
419
+ 363
420
+ ],
421
+ "page_idx": 2
422
+ },
423
+ {
424
+ "type": "text",
425
+ "text": "Data cleaning with Code-NLI. Since the LLM generates data automatically, it can be noisy—for instance, an edit request might reference the second list item while the bounding box corresponds to the third. To filter out such inconsistencies, we draw inspiration from Natural Language Inference (NLI) (Bowman et al., 2015), which predicts whether a hypothesis aligns with a given premise. Here, the premise is the PDF, and the hypothesis is the generated user-system conversation.",
426
+ "bbox": [
427
+ 507,
428
+ 363,
429
+ 882,
430
+ 524
431
+ ],
432
+ "page_idx": 2
433
+ },
434
+ {
435
+ "type": "text",
436
+ "text": "To verify these conversations, we generate substatements that must hold true. For example, to confirm that a bounding box corresponds to the second list item, a counting statement must evaluate its position as 2 (Figure 3). These sub-statements are generated by an LLM as Pythonic function calls, which are then executed using our implemented APIs for verification. Pythonic statements make verification algorithmic and automatic. API details and prompts in Appendix C.",
437
+ "bbox": [
438
+ 507,
439
+ 526,
440
+ 882,
441
+ 686
442
+ ],
443
+ "page_idx": 2
444
+ },
445
+ {
446
+ "type": "text",
447
+ "text": "Once we have clean synthetic instruction-tuning data, we finetune a MLLM. The input is an image of the PDF and a user-edit request. The output is a bounding box localizing the edit element.",
448
+ "bbox": [
449
+ 507,
450
+ 687,
451
+ 880,
452
+ 751
453
+ ],
454
+ "page_idx": 2
455
+ },
456
+ {
457
+ "type": "text",
458
+ "text": "4 Experiments",
459
+ "text_level": 1,
460
+ "bbox": [
461
+ 507,
462
+ 765,
463
+ 653,
464
+ 782
465
+ ],
466
+ "page_idx": 2
467
+ },
468
+ {
469
+ "type": "text",
470
+ "text": "To balance quality and the cost of long context, we use LLaMA-3.1 70B (Grattafori and Team, 2024) for generating synthetic data. It is run on 40 Nvidia A100-80 GiB GPUs for 2 days. Next, GPT-4o (OpenAI, 2024) is used in Code-NLI cleaning as small context length allows us to optimize for quality. This results in $\\sim 3\\mathrm{M}$ samples for instruction tuning. We keep $95\\%$ samples for training and the rest",
471
+ "bbox": [
472
+ 507,
473
+ 791,
474
+ 882,
475
+ 921
476
+ ],
477
+ "page_idx": 2
478
+ },
479
+ {
480
+ "type": "page_number",
481
+ "text": "31129",
482
+ "bbox": [
483
+ 475,
484
+ 927,
485
+ 524,
486
+ 940
487
+ ],
488
+ "page_idx": 2
489
+ },
490
+ {
491
+ "type": "table",
492
+ "img_path": "images/78035f76c3c0889048a9644ed344a3f10fd2a246da4a03660acc3e10ed03a3a3.jpg",
493
+ "table_caption": [],
494
+ "table_footnote": [],
495
+ "table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"3\">PDF</td><td colspan=\"3\">Design</td></tr><tr><td>A@0.5</td><td>A@0.30</td><td>A@0.25</td><td>A@0.5</td><td>A@0.30</td><td>A@0.25</td></tr><tr><td colspan=\"7\">Zero-Shot</td></tr><tr><td>GPT-4o</td><td>5.624</td><td>14.258</td><td>17.885</td><td>28.961</td><td>46.766</td><td>52.582</td></tr><tr><td>CogAgent</td><td>4.16</td><td>9.42</td><td>11.13</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Qwen2.5-VL</td><td>2.90</td><td>6.97</td><td>8.72</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Ferret-UI</td><td>0.0</td><td>0.22</td><td>0.28</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Phi-3-V</td><td>0.450</td><td>2.727</td><td>4.189</td><td>10.423</td><td>25.264</td><td>31.316</td></tr><tr><td>DELOC</td><td>14.703</td><td>30.391</td><td>35.198</td><td>32.421</td><td>51.489</td><td>57.445</td></tr><tr><td colspan=\"7\">Finetuned</td></tr><tr><td>DocEdit</td><td>36.500</td><td>-</td><td>-</td><td>34.340</td><td>-</td><td>-</td></tr><tr><td>DocEdit-v2</td><td>48.690</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Phi-3-V</td><td>26.567</td><td>48.524</td><td>54.371</td><td>34.523</td><td>57.183</td><td>62.272</td></tr><tr><td>DELOC</td><td>49.620</td><td>67.023</td><td>69.975</td><td>57.012</td><td>71.566</td><td>74.976</td></tr></table>",
496
+ "bbox": [
497
+ 117,
498
+ 82,
499
+ 485,
500
+ 250
501
+ ],
502
+ "page_idx": 3
503
+ },
504
+ {
505
+ "type": "table",
506
+ "img_path": "images/f641877bf18a122184a0e00ce187125fa3ab840f24ca1622018db45c10563752.jpg",
507
+ "table_caption": [
508
+ "Table 2: Comparison of DELOC with state-of-the-art on DocEdit edit location (bbox) prediction. DELOC outperforms both existing Multimodal LLMs in zeroshot setting and specialized models in finetuned setting. $A@X$ denotes Accuracy at IoU of $X$ ."
509
+ ],
510
+ "table_footnote": [],
511
+ "table_body": "<table><tr><td>Model</td><td>A@0.5</td></tr><tr><td>DELOC</td><td>6.297</td></tr><tr><td>- w/o CODE-NLI Filtering</td><td>5.848</td></tr></table>",
512
+ "bbox": [
513
+ 191,
514
+ 335,
515
+ 411,
516
+ 382
517
+ ],
518
+ "page_idx": 3
519
+ },
520
+ {
521
+ "type": "text",
522
+ "text": "Table 3: Ablation of Code-NLI. The filtering step improves DELOC performance.",
523
+ "bbox": [
524
+ 112,
525
+ 387,
526
+ 489,
527
+ 416
528
+ ],
529
+ "page_idx": 3
530
+ },
531
+ {
532
+ "type": "text",
533
+ "text": "for validation. More data statistics in Appendix D.",
534
+ "bbox": [
535
+ 112,
536
+ 432,
537
+ 487,
538
+ 450
539
+ ],
540
+ "page_idx": 3
541
+ },
542
+ {
543
+ "type": "text",
544
+ "text": "For instruction-tuning, Phi-3V (Abdin and Team, 2024) is used, as it is small and thus easy to train. We use a learning rate of 5e-5, weight decay of 0.12, AdamW optimizer, and a linear scheduler with a warmup. It takes approximately 10 hours to train the model on 128 Nvidia A100-80 GiB GPUs. More architecture and training details are in Appendix E and Appendix F.",
545
+ "bbox": [
546
+ 112,
547
+ 451,
548
+ 487,
549
+ 580
550
+ ],
551
+ "page_idx": 3
552
+ },
553
+ {
554
+ "type": "text",
555
+ "text": "We evaluate our proposed approach on the test set of the DocEdit dataset (Mathur et al., 2023). The test is split into two subsets: PDFs, which are more verbose, and Designs, which are graphical. We use the originally proposed metric of calculating accuracy by evaluating if the Intersection Over Union (IoU) between predicted bbox and groundtruth bbox is greater than a threshold. For baselines, we select state-of-the-art MLLMs that understand text well from images. Consequently, we select GPT-4o, Phi3-V, CogAgent, Ferret-UI, Qwen2.5-VL (Bai et al., 2025) and Fuyu (Bavishi et al., 2023). We also consider as baselines specialized models - DocEdit v1 and v2 - that are finetuned on DocEdit. We compare DELOC against these baselines in two settings: zero-shot - where DELOC is not finetuned on the DocEdit train set, and finetuned - where it is.",
556
+ "bbox": [
557
+ 112,
558
+ 581,
559
+ 489,
560
+ 869
561
+ ],
562
+ "page_idx": 3
563
+ },
564
+ {
565
+ "type": "text",
566
+ "text": "Our main results are summarized in Table 2. We make the following observations: 1) DELOC outperforms both open-source and proprietary closed",
567
+ "bbox": [
568
+ 112,
569
+ 873,
570
+ 489,
571
+ 921
572
+ ],
573
+ "page_idx": 3
574
+ },
575
+ {
576
+ "type": "image",
577
+ "img_path": "images/346a7c830ca3cd72211ea5d8dd72aab6483a319921ba7abd1e077baed3bca2b5.jpg",
578
+ "image_caption": [
579
+ "Figure 4: Datasize Ablation: DELOC's performance improves with data."
580
+ ],
581
+ "image_footnote": [],
582
+ "bbox": [
583
+ 564,
584
+ 87,
585
+ 831,
586
+ 206
587
+ ],
588
+ "page_idx": 3
589
+ },
590
+ {
591
+ "type": "image",
592
+ "img_path": "images/049356e43ffae248e45e761dc8d937f584b751c1303a1426d4329383dc78a6d1.jpg",
593
+ "image_caption": [
594
+ "Figure 5: Qualitative sample prediction from DELOC."
595
+ ],
596
+ "image_footnote": [],
597
+ "bbox": [
598
+ 522,
599
+ 256,
600
+ 878,
601
+ 458
602
+ ],
603
+ "page_idx": 3
604
+ },
605
+ {
606
+ "type": "text",
607
+ "text": "source MLLM by $\\sim 3\\mathrm{x}$ on the PDF subset and $\\sim 12\\%$ on the Design subset. 2) DELOC outperforms all specialized finetuned models on both subsets. 3) DELOC improves over its backbone MLLM, Phi-3-V, by $28\\mathrm{x}$ on PDFs and $3\\mathrm{x}$ on Designs, underscoring the significance of our synthetic instruction tuning. 4) Existing zero-shot MLLM performance is at best $5\\%$, indicating grounding in PDFs is challenging. 5) All MLLMs perform much better on the Design subset. This is understandable given all MLLMs have been pretrained on natural images, which are closer in domain to Designs than to verbose PDFs. Notably, we found Fuyu to be quite bad; hence we do not include it in Table 2. We also don't compare against Agent-DocEdit as it includes an additional post-processing step that assumes availability of ground-truth bbox for all elements in a PDF.",
608
+ "bbox": [
609
+ 507,
610
+ 498,
611
+ 882,
612
+ 785
613
+ ],
614
+ "page_idx": 3
615
+ },
616
+ {
617
+ "type": "text",
618
+ "text": "Our ablation on Code-NLI in Table 3 indicates the importance of data cleaning to our approach. We also manually evaluate the quality of Code-NLI on 25 samples ourselves. Our findings are: 1) Precision is $84\\%$ : This implies that most of the data that Code-NLI predicts as correct is actually correct; only $16\\%$ incorrect data passes through the Code-NLI filter. 2) Recall is $63.64\\%$ : This",
619
+ "bbox": [
620
+ 507,
621
+ 791,
622
+ 882,
623
+ 919
624
+ ],
625
+ "page_idx": 3
626
+ },
627
+ {
628
+ "type": "page_number",
629
+ "text": "31130",
630
+ "bbox": [
631
+ 475,
632
+ 927,
633
+ 524,
634
+ 940
635
+ ],
636
+ "page_idx": 3
637
+ },
638
+ {
639
+ "type": "text",
640
+ "text": "implies it rejects a decent amount of good data $(36\\%)$ . This loss is acceptable for our use case since we generate a very large amount of synthetic data. The important consideration for us is that incorrect data should not be flagged as correct – which Code-NLI does (84% precision).",
641
+ "bbox": [
642
+ 112,
643
+ 84,
644
+ 487,
645
+ 180
646
+ ],
647
+ "page_idx": 4
648
+ },
649
+ {
650
+ "type": "text",
651
+ "text": "Figure 4 demonstrates that performance of DELOC improves with data size. We also provide a qualitative example in Figure 5 that shows 1) DELOC acquires effective capability of verbose PDF grounding. 2) Ambiguity in the expected response can lead to divergence from ground-truth. More examples in Appendix G.",
652
+ "bbox": [
653
+ 112,
654
+ 181,
655
+ 489,
656
+ 294
657
+ ],
658
+ "page_idx": 4
659
+ },
660
+ {
661
+ "type": "text",
662
+ "text": "5 Conclusion",
663
+ "text_level": 1,
664
+ "bbox": [
665
+ 112,
666
+ 304,
667
+ 247,
668
+ 319
669
+ ],
670
+ "page_idx": 4
671
+ },
672
+ {
673
+ "type": "text",
674
+ "text": "In this work, we propose DELOC, a system to ground PDF edit requests spatially. DELOC leverages the spatial and hierarchical structure of PDFs to generate 3M synthetic instruction-tuning samples that successfully adapt an existing MLLM for PDF grounding. Our results demonstrate that DELOC outperforms both existing MLLMs, including GPT-4o, and specialized models on DocEdit.",
675
+ "bbox": [
676
+ 112,
677
+ 328,
678
+ 489,
679
+ 456
680
+ ],
681
+ "page_idx": 4
682
+ },
683
+ {
684
+ "type": "text",
685
+ "text": "Limitations",
686
+ "text_level": 1,
687
+ "bbox": [
688
+ 112,
689
+ 468,
690
+ 220,
691
+ 482
692
+ ],
693
+ "page_idx": 4
694
+ },
695
+ {
696
+ "type": "text",
697
+ "text": "We attempted to cover a comprehensive list of PDF elements in our synthetically generated data. This coverage is mostly dependent on the element annotations present in the PDF analysis datasets we began with to create our hierarchical representation. As such, there could be some PDF elements which are not covered. We leave for future work the addition of more PDF analysis datasets to increase element coverage. Furthermore, this method (like most grounding MLLM approaches) generates bounding boxes using auto-regressive prediction. The loss used assigns equal weight to all predictions that are not exactly the same as the ground-truth bbox, irrespective of their proximity to it. Ideally, the loss should be higher for a predicted bbox that is farther from the ground truth than for one that is closer. Reinforcement learning based preference optimization could be a way to address this issue in future work.",
698
+ "bbox": [
699
+ 112,
700
+ 492,
701
+ 489,
702
+ 797
703
+ ],
704
+ "page_idx": 4
705
+ },
706
+ {
707
+ "type": "text",
708
+ "text": "References",
709
+ "text_level": 1,
710
+ "bbox": [
711
+ 115,
712
+ 824,
713
+ 213,
714
+ 838
715
+ ],
716
+ "page_idx": 4
717
+ },
718
+ {
719
+ "type": "list",
720
+ "sub_type": "ref_text",
721
+ "list_items": [
722
+ "Marah Abdin and The Phi-3 Team. 2024. Phi-3 technical report: A highly capable language model locally on your phone. Preprint, arXiv:2404.14219.",
723
+ "Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang,"
724
+ ],
725
+ "bbox": [
726
+ 115,
727
+ 845,
728
+ 489,
729
+ 921
730
+ ],
731
+ "page_idx": 4
732
+ },
733
+ {
734
+ "type": "list",
735
+ "sub_type": "ref_text",
736
+ "list_items": [
737
+ "Shijie Wang, Jun Tang, Humen Zhong, Yanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. 2025. Qwen2.5-vl technical report. Preprint, arXiv:2502.13923.",
738
+ "Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sağnak Taşırlar. 2023. Introducing our multimodal models.",
739
+ "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.",
740
+ "Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. 2023. Shikra: Unleashing multimodal llm's referential dialogue magic. arXiv preprint arXiv:2306.15195.",
741
+ "Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Songyang Zhang, Haodong Duan, Wenwei Zhang, Yining Li, Hang Yan, Yang Gao, Zhe Chen, Xinyue Zhang, Wei Li, Jingwen Li, Wenhai Wang, Kai Chen, Conghui He, Xingcheng Zhang, Jifeng Dai, Yu Qiao, Dahua Lin, and Jiaqi Wang. 2024. Internlm-xcomposer2-4khd: A pioneering large vision-language model handling resolutions from 336 pixels to 4k hd. Preprint, arXiv:2404.06512.",
742
+ "Aaron Grattafori and The LLaMA Team. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.",
743
+ "Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhui Ji, Yan Wang, Zihan Wang, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. Cogagent: A visual language model for gui agents. Preprint, arXiv:2312.08914.",
744
+ "Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. 2024. Lisa: Reasoning segmentation via large language model. Preprint, arXiv:2308.00692.",
745
+ "Liunian Harold Li*, Pengchuan Zhang*, Haotian Zhang*, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. 2022. Grounded language-image pre-training. In CVPR.",
746
+ "Junpeng Liu, Tianyue Ou, Yifan Song, Yuxiao Qu, Wai Lam, Chenyan Xiong, Wenhu Chen, Graham Neubig, and Xiang Yue. 2024. Harnessing webpage uis for text-rich visual understanding. Preprint, arXiv:2410.13824.",
747
+ "Tengchao Lv, Yupan Huang, Jingye Chen, Lei Cui, Shuming Ma, Yaoyao Chang, Shaohan Huang, Wenhui Wang, Li Dong, Weiyao Luo, et al. 2023."
748
+ ],
749
+ "bbox": [
750
+ 510,
751
+ 85,
752
+ 884,
753
+ 921
754
+ ],
755
+ "page_idx": 4
756
+ },
757
+ {
758
+ "type": "page_number",
759
+ "text": "31131",
760
+ "bbox": [
761
+ 475,
762
+ 927,
763
+ 522,
764
+ 940
765
+ ],
766
+ "page_idx": 4
767
+ },
768
+ {
769
+ "type": "list",
770
+ "sub_type": "ref_text",
771
+ "list_items": [
772
+ "Kosmos-2.5: A multimodal literate model. arXiv preprint arXiv:2309.11419.",
773
+ "Chuofan Ma, Yi Jiang, Jiannan Wu, Zehuan Yuan, and Xiaojuan Qi. 2024. Groma: Localized visual tokenization for grounding multimodal large language models. arXiv preprint arXiv:2404.13013.",
774
+ "Puneet Mathur, Rajiv Jain, Jiuxiang Gu, Franck Dernoncourt, Dinesh Manocha, and Vlad Morariu. 2023. Docedit: Language-guided document editing. In Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI).",
775
+ "OpenAI. 2024. Gpt-4o system card. Preprint, arXiv:2410.21276.",
776
+ "Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar. 2022. Doclaynet: A large human-annotated dataset for documentlayout segmentation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, page 3743-3751. ACM.",
777
+ "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. Preprint, arXiv:2103.00020.",
778
+ "Manan Suri, Puneet Mathur, Franck Dernoncourt, Rajiv Jain, Vlad I Morariu, Ramit Sawhney, Preslav Nakov, and Dinesh Manocha. 2024. DocEdit-v2: Document structure editing via multimodal LLM grounding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15485-15505, Miami, Florida, USA. Association for Computational Linguistics.",
779
+ "Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. Cogvlm: Visual expert for pretrained language models. Preprint, arXiv:2311.03079.",
780
+ "Te-Lin Wu, Rajiv Jain, Yufan Zhou, Puneet Mathur, and Vlad I Morariu. 2024. Agent-docedit: Language-instructed LLM agent for content-rich document editing. In First Conference on Language Modeling.",
781
+ "Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, and Yinfei Yang. 2023. Ferret: Refer and ground anything anywhere at any granularity. Preprint, arXiv:2310.07704.",
782
+ "Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, and Zhe Gan. 2024. Ferret-ui: Grounded mobile ui understanding with multimodal llms. Preprint, arXiv:2404.05719."
783
+ ],
784
+ "bbox": [
785
+ 115,
786
+ 85,
787
+ 485,
788
+ 917
789
+ ],
790
+ "page_idx": 5
791
+ },
792
+ {
793
+ "type": "list",
794
+ "sub_type": "ref_text",
795
+ "list_items": [
796
+ "Yuqian Yuan, Wentong Li, Jian Liu, Dongqi Tang, Xinjie Luo, Chi Qin, Lei Zhang, and Jianke Zhu. 2023. Osprey: Pixel understanding with visual instruction tuning. Preprint, arXiv:2312.10032.",
797
+ "Ao Zhang, Yuan Yao, Wei Ji, Zhiyuan Liu, and Tat-Seng Chua. 2023. Next-chat: An lmm for chat, detection and segmentation. Preprint, arXiv:2311.04498.",
798
+ "Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. Preprint, arXiv:1908.07836."
799
+ ],
800
+ "bbox": [
801
+ 512,
802
+ 85,
803
+ 880,
804
+ 237
805
+ ],
806
+ "page_idx": 5
807
+ },
808
+ {
809
+ "type": "page_number",
810
+ "text": "31132",
811
+ "bbox": [
812
+ 475,
813
+ 928,
814
+ 524,
815
+ 940
816
+ ],
817
+ "page_idx": 5
818
+ },
819
+ {
820
+ "type": "text",
821
+ "text": "We provide additional details here for further clarification.",
822
+ "bbox": [
823
+ 112,
824
+ 84,
825
+ 487,
826
+ 115
827
+ ],
828
+ "page_idx": 6
829
+ },
830
+ {
831
+ "type": "list",
832
+ "sub_type": "text",
833
+ "list_items": [
834
+ "- Element Creation Heuristics (Appendix A)",
835
+ "- Synthetic Data Generation Prompt (Appendix B)",
836
+ "- Code-NLI Prompt and Samples (Appendix C)",
837
+ "- Data Statistics (Appendix D)",
838
+ "- Model Architecture (Appendix E)",
839
+ "- Additional Training Details (Appendix F)",
840
+ "- Additional Qualitative Examples (Appendix G)"
841
+ ],
842
+ "bbox": [
843
+ 134,
844
+ 128,
845
+ 489,
846
+ 335
847
+ ],
848
+ "page_idx": 6
849
+ },
850
+ {
851
+ "type": "image",
852
+ "img_path": "images/46a94597db36ae44aab255a15c85781bcea974a3a96ee472a3a25ffc18c9b672.jpg",
853
+ "image_caption": [
854
+ "Figure 6: Sample output from using our heuristic to create lines and list-items."
855
+ ],
856
+ "image_footnote": [],
857
+ "bbox": [
858
+ 122,
859
+ 360,
860
+ 480,
861
+ 705
862
+ ],
863
+ "page_idx": 6
864
+ },
865
+ {
866
+ "type": "text",
867
+ "text": "A Element Creation Heuristics",
868
+ "text_level": 1,
869
+ "bbox": [
870
+ 114,
871
+ 766,
872
+ 400,
873
+ 783
874
+ ],
875
+ "page_idx": 6
876
+ },
877
+ {
878
+ "type": "text",
879
+ "text": "We require annotations for elements in PDFs to create the spatially-aware hierarchical representation. While an element annotation may be present in a dataset, its subelement or parent element may not have the requisite annotation available. We use heuristics to create annotations for such elements. To create lines within a paragraph, we take the bounding box of every word in the paragraph. All",
880
+ "bbox": [
881
+ 112,
882
+ 791,
883
+ 489,
884
+ 921
885
+ ],
886
+ "page_idx": 6
887
+ },
888
+ {
889
+ "type": "text",
890
+ "text": "the words with same y-coordinates are considered to be in the same line. Similarly, to create lists from lines, we compare the x-coordinates of the beginning of the lines. Outliers are considered to be the beginning of a new list item. We illustrate the output from such heuristics to create lines and list-items in Figure 6. While these are simple, they work quite well.",
891
+ "bbox": [
892
+ 507,
893
+ 84,
894
+ 882,
895
+ 212
896
+ ],
897
+ "page_idx": 6
898
+ },
899
+ {
900
+ "type": "text",
901
+ "text": "B Synthetic Data Generation Prompt",
902
+ "text_level": 1,
903
+ "bbox": [
904
+ 507,
905
+ 225,
906
+ 848,
907
+ 242
908
+ ],
909
+ "page_idx": 6
910
+ },
911
+ {
912
+ "type": "text",
913
+ "text": "Once we have a hierarchical representation of the PDF elements along with their corresponding bounding box annotations, we feed them into an LLM to generate the instruction tuning data. Apart from the PDF elements, we input a system prompt containing the guidelines for generating an instruction tuning set and a one-shot example of how the user system conversation should be formatted. The process is illustrated in Figure 7.",
914
+ "bbox": [
915
+ 507,
916
+ 250,
917
+ 884,
918
+ 394
919
+ ],
920
+ "page_idx": 6
921
+ },
922
+ {
923
+ "type": "text",
924
+ "text": "The system prompt contains guidelines describing the task, input, and expected output. It contains instructions to generate diverse and compositional edit requests, to only utilize the information in the given hierarchical PDF representation, to not hallucinate, and so on. In addition, we provide a one-shot example to the LLM to further illustrate our requirements and guide it to follow expected format.",
925
+ "bbox": [
926
+ 507,
927
+ 395,
928
+ 884,
929
+ 538
930
+ ],
931
+ "page_idx": 6
932
+ },
933
+ {
934
+ "type": "text",
935
+ "text": "We provide the system prompt below:",
936
+ "bbox": [
937
+ 527,
938
+ 541,
939
+ 811,
940
+ 556
941
+ ],
942
+ "page_idx": 6
943
+ },
944
+ {
945
+ "type": "text",
946
+ "text": "You are an AI visual assistance that can analyze PDFs. You will receive information describing a pdf, where each pdf element (text, paragraph, header, footer, chart etc.) detection is represented in a json format. Each element is denoted by its type, followed by its bounding box coordinates, followed by its value and/or its subelements. Bounding box coordinates are represented as $(x1, y1, x2, y2)$ . These values correspond to the top left $x$ , top left $y$ , bottom right $x$ , and bottom right $y$ . The subelements are formatted and indented with spaces. The content of a pdf element is found at the 'value' key. The pdf elements are organized in the order in which they are present in the pdf: top to bottom, left to right. The pdfs can be single column, double column or multiple columns. Judge the number of columns in the pdfs by looking at the relative positioning of the pdf elements' $x1$ .",
947
+ "bbox": [
948
+ 510,
949
+ 567,
950
+ 880,
951
+ 760
952
+ ],
953
+ "page_idx": 6
954
+ },
955
+ {
956
+ "type": "text",
957
+ "text": "Using the provided texts and coordinates, design question and answer pairs simulating the interactions between a user and system. Conversations should focus on potential user edit requests on the pdf elements (vs perception). Please follow the guidelines below while generating the conversations:",
958
+ "bbox": [
959
+ 510,
960
+ 766,
961
+ 878,
962
+ 835
963
+ ],
964
+ "page_idx": 6
965
+ },
966
+ {
967
+ "type": "list",
968
+ "sub_type": "text",
969
+ "list_items": [
970
+ "1. The edit requests should be diverse, requiring different kind of editing like adding, deleting, changing, modifying, swapping, moving, replacing, merging, splitting, and so on.",
971
+ "2. Please refer to the pdf-element that needs editing in diverse compositional ways. For example, $\\backslash$ delete last line of the third"
972
+ ],
973
+ "bbox": [
974
+ 510,
975
+ 841,
976
+ 877,
977
+ 919
978
+ ],
979
+ "page_idx": 6
980
+ },
981
+ {
982
+ "type": "page_number",
983
+ "text": "31133",
984
+ "bbox": [
985
+ 475,
986
+ 927,
987
+ 524,
988
+ 940
989
+ ],
990
+ "page_idx": 6
991
+ },
992
+ {
993
+ "type": "text",
994
+ "text": "System Prompt",
995
+ "text_level": 1,
996
+ "bbox": [
997
+ 144,
998
+ 99,
999
+ 211,
1000
+ 108
1001
+ ],
1002
+ "page_idx": 7
1003
+ },
1004
+ {
1005
+ "type": "text",
1006
+ "text": "You are an AI visual assistance that can analyze PDFs. You will receive information describing a pdf, where each pdf element (text, paragraph, header, footer, chart etc.) detection is represented in a json format. Each element is denoted by its type, followed by its bounding box coordinates, followed by its value and/or its subelements. Bounding box coordinates are represented as (x1, y1, x2, y2). These values correspond to the top left $x$ , top left $y$ , bottom right $x$ , and bottom right $y$ . The subelements are formatted and indented with spaces. The content of a pdf element is found at the 'value' key. The pdf elements are organized in the order in which they are present in the pdf: top to bottom, left to right. The pdfs can be single column, double column or multiple columns. Judge the number of columns in the pdfs by looking at the relative positioning of the pdf elements' $x_1$ . Using the provided texts and coordinates, design question and answer pairs simulating the interactions between a user and system. Conversations should focus on potential user edit requests on the pdf elements (vs perception).",
1007
+ "bbox": [
1008
+ 144,
1009
+ 109,
1010
+ 383,
1011
+ 236
1012
+ ],
1013
+ "page_idx": 7
1014
+ },
1015
+ {
1016
+ "type": "text",
1017
+ "text": "Please follow the guidelines below while generating the conversations:",
1018
+ "bbox": [
1019
+ 144,
1020
+ 241,
1021
+ 352,
1022
+ 255
1023
+ ],
1024
+ "page_idx": 7
1025
+ },
1026
+ {
1027
+ "type": "text",
1028
+ "text": "1. The edit requests should be diverse, requiring different kind of editing like adding, deleting, changing, modifying, swapping, moving, replacing, merging, splitting, and so on. \n2. Please refer to the pdf-element that needs editing in diverse compositional ways. For example, \"delete last line of the third paragraph\" or \"move the table in between the two figures from top to bottom\".",
1029
+ "bbox": [
1030
+ 144,
1031
+ 256,
1032
+ 381,
1033
+ 306
1034
+ ],
1035
+ "page_idx": 7
1036
+ },
1037
+ {
1038
+ "type": "text",
1039
+ "text": "One Shot Example",
1040
+ "text_level": 1,
1041
+ "bbox": [
1042
+ 393,
1043
+ 87,
1044
+ 468,
1045
+ 95
1046
+ ],
1047
+ "page_idx": 7
1048
+ },
1049
+ {
1050
+ "type": "text",
1051
+ "text": "<user>",
1052
+ "bbox": [
1053
+ 396,
1054
+ 96,
1055
+ 420,
1056
+ 102
1057
+ ],
1058
+ "page_idx": 7
1059
+ },
1060
+ {
1061
+ "type": "text",
1062
+ "text": "element':text",
1063
+ "bbox": [
1064
+ 396,
1065
+ 103,
1066
+ 447,
1067
+ 108
1068
+ ],
1069
+ "page_idx": 7
1070
+ },
1071
+ {
1072
+ "type": "text",
1073
+ "text": "'bbox': [84, 382, 484, 413],",
1074
+ "bbox": [
1075
+ 396,
1076
+ 109,
1077
+ 482,
1078
+ 115
1079
+ ],
1080
+ "page_idx": 7
1081
+ },
1082
+ {
1083
+ "type": "text",
1084
+ "text": "subelements: [element: line,",
1085
+ "bbox": [
1086
+ 396,
1087
+ 116,
1088
+ 517,
1089
+ 122
1090
+ ],
1091
+ "page_idx": 7
1092
+ },
1093
+ {
1094
+ "type": "list",
1095
+ "sub_type": "text",
1096
+ "list_items": [
1097
+ "'value' :interactions were then functionally annotated for",
1098
+ "value : Interactions were the"
1099
+ ],
1100
+ "bbox": [
1101
+ 431,
1102
+ 123,
1103
+ 616,
1104
+ 135
1105
+ ],
1106
+ "page_idx": 7
1107
+ },
1108
+ {
1109
+ "type": "text",
1110
+ "text": "element: line",
1111
+ "bbox": [
1112
+ 428,
1113
+ 137,
1114
+ 480,
1115
+ 143
1116
+ ],
1117
+ "page_idx": 7
1118
+ },
1119
+ {
1120
+ "type": "text",
1121
+ "text": "\"Bbox:[84,397,377,414]",
1122
+ "bbox": [
1123
+ 428,
1124
+ 143,
1125
+ 517,
1126
+ 149
1127
+ ],
1128
+ "page_idx": 7
1129
+ },
1130
+ {
1131
+ "type": "text",
1132
+ "text": "'value': 'functional processes and pathways involved.'].",
1133
+ "bbox": [
1134
+ 428,
1135
+ 149,
1136
+ 611,
1137
+ 156
1138
+ ],
1139
+ "page_idx": 7
1140
+ },
1141
+ {
1142
+ "type": "text",
1143
+ "text": "element':text'",
1144
+ "bbox": [
1145
+ 396,
1146
+ 156,
1147
+ 482,
1148
+ 162
1149
+ ],
1150
+ "page_idx": 7
1151
+ },
1152
+ {
1153
+ "type": "text",
1154
+ "text": "'bbox': [84, 410, 484, 498],",
1155
+ "bbox": [
1156
+ 396,
1157
+ 162,
1158
+ 482,
1159
+ 168
1160
+ ],
1161
+ "page_idx": 7
1162
+ },
1163
+ {
1164
+ "type": "text",
1165
+ "text": "ents': ['element': 'line',",
1166
+ "bbox": [
1167
+ 428,
1168
+ 170,
1169
+ 499,
1170
+ 175
1171
+ ],
1172
+ "page_idx": 7
1173
+ },
1174
+ {
1175
+ "type": "text",
1176
+ "text": "'bbox': [109, 411, 484, 428],...",
1177
+ "bbox": [
1178
+ 428,
1179
+ 175,
1180
+ 526,
1181
+ 181
1182
+ ],
1183
+ "page_idx": 7
1184
+ },
1185
+ {
1186
+ "type": "text",
1187
+ "text": "system>",
1188
+ "text_level": 1,
1189
+ "bbox": [
1190
+ 396,
1191
+ 181,
1192
+ 428,
1193
+ 187
1194
+ ],
1195
+ "page_idx": 7
1196
+ },
1197
+ {
1198
+ "type": "list",
1199
+ "sub_type": "text",
1200
+ "list_items": [
1201
+ "User: Locate where changes are required to perform this query, 'Move the top figure from the center of the page to left'.",
1202
+ "System: The figure that is required to move is located at [241, 88, 756, 323]."
1203
+ ],
1204
+ "bbox": [
1205
+ 393,
1206
+ 187,
1207
+ 620,
1208
+ 212
1209
+ ],
1210
+ "page_idx": 7
1211
+ },
1212
+ {
1213
+ "type": "list",
1214
+ "sub_type": "text",
1215
+ "list_items": [
1216
+ "User: Find the specific location requiring changes to perform this request. \"Underline the title.\" Results and Discussion",
1217
+ "System: The title \"Results and Discussion\" can be found at [513, 675, 721, 691]."
1218
+ ],
1219
+ "bbox": [
1220
+ 393,
1221
+ 218,
1222
+ 618,
1223
+ 242
1224
+ ],
1225
+ "page_idx": 7
1226
+ },
1227
+ {
1228
+ "type": "text",
1229
+ "text": "Detections",
1230
+ "text_level": 1,
1231
+ "bbox": [
1232
+ 393,
1233
+ 246,
1234
+ 438,
1235
+ 253
1236
+ ],
1237
+ "page_idx": 7
1238
+ },
1239
+ {
1240
+ "type": "text",
1241
+ "text": "'eleme",
1242
+ "bbox": [
1243
+ 396,
1244
+ 254,
1245
+ 420,
1246
+ 260
1247
+ ],
1248
+ "page_idx": 7
1249
+ },
1250
+ {
1251
+ "type": "text",
1252
+ "text": "'bbox': [76, 76, 179, 95],",
1253
+ "bbox": [
1254
+ 396,
1255
+ 261,
1256
+ 473,
1257
+ 266
1258
+ ],
1259
+ "page_idx": 7
1260
+ },
1261
+ {
1262
+ "type": "text",
1263
+ "text": "'subele",
1264
+ "bbox": [
1265
+ 396,
1266
+ 266,
1267
+ 420,
1268
+ 271
1269
+ ],
1270
+ "page_idx": 7
1271
+ },
1272
+ {
1273
+ "type": "text",
1274
+ "text": "'element': 'line'",
1275
+ "bbox": [
1276
+ 431,
1277
+ 273,
1278
+ 480,
1279
+ 278
1280
+ ],
1281
+ "page_idx": 7
1282
+ },
1283
+ {
1284
+ "type": "text",
1285
+ "text": "'bbox': [76, 76, 179, 94],",
1286
+ "bbox": [
1287
+ 431,
1288
+ 279,
1289
+ 512,
1290
+ 285
1291
+ ],
1292
+ "page_idx": 7
1293
+ },
1294
+ {
1295
+ "type": "text",
1296
+ "text": "value':AppendDIX}\\}},",
1297
+ "bbox": [
1298
+ 431,
1299
+ 285,
1300
+ 504,
1301
+ 290
1302
+ ],
1303
+ "page_idx": 7
1304
+ },
1305
+ {
1306
+ "type": "text",
1307
+ "text": "'elem",
1308
+ "bbox": [
1309
+ 396,
1310
+ 291,
1311
+ 413,
1312
+ 296
1313
+ ],
1314
+ "page_idx": 7
1315
+ },
1316
+ {
1317
+ "type": "text",
1318
+ "text": "'bbox': [76, 123, 489, 183]",
1319
+ "bbox": [
1320
+ 396,
1321
+ 297,
1322
+ 482,
1323
+ 303
1324
+ ],
1325
+ "page_idx": 7
1326
+ },
1327
+ {
1328
+ "type": "text",
1329
+ "text": "'subelements': I",
1330
+ "bbox": [
1331
+ 396,
1332
+ 303,
1333
+ 448,
1334
+ 307
1335
+ ],
1336
+ "page_idx": 7
1337
+ },
1338
+ {
1339
+ "type": "list",
1340
+ "sub_type": "text",
1341
+ "list_items": [
1342
+ "element: line.",
1343
+ "'bbox':[76,122,485,139]",
1344
+ "'value': 'As stated in Section 2.1, the..."
1345
+ ],
1346
+ "bbox": [
1347
+ 431,
1348
+ 309,
1349
+ 557,
1350
+ 326
1351
+ ],
1352
+ "page_idx": 7
1353
+ },
1354
+ {
1355
+ "type": "list",
1356
+ "sub_type": "text",
1357
+ "list_items": [
1358
+ "User: Locate where changes are required to perform this query, 'Add \"Introduction\" as a new title before \"Appendix\".",
1359
+ "System: The location where \"Appendix\" is found is at [76, 76, 179, 95].",
1360
+ "User: Find the specific location requiring changes to perform this request, 'Delete the third list item under \"A.1.Environment\":",
1361
+ "System: The third list item under \"A.1.Environment\" can be found at [92, 398, 485, 459].",
1362
+ "User: Find the place that needs to be edited to",
1363
+ "process this query, 'Move the list under \"A.1.4.\"",
1364
+ "\"Brain-Body Interface\" to the bottom of the page!",
1365
+ "System: The list under \"A.1.4. Brain-Body Interface\" can be found at [514, 175, 926, 531].",
1366
+ "User: Please find the area that needs editing to carry out this user request, 'Merge the two list items under \"A.1.5. Experiment\".",
1367
+ "System: The two list items under \"A.1.5. Experiment\" can be found at [530, 568, 923, 658] and [530, 657, 923, 806].",
1368
+ "User: Please locate the part that needs modification to process this query, \"Swap the order of the first two list items under \"A.1. Design and Editing\".",
1369
+ "System: The first two list items under \"A.1. Design and Editing\" can be found at [92, 236, 485, 341] and [92, 339, 485, 400]."
1370
+ ],
1371
+ "bbox": [
1372
+ 678,
1373
+ 87,
1374
+ 853,
1375
+ 329
1376
+ ],
1377
+ "page_idx": 7
1378
+ },
1379
+ {
1380
+ "type": "text",
1381
+ "text": "Figure 7: Illustration of LLM prompting to generate synthetic instruction tuning data.",
1382
+ "bbox": [
1383
+ 205,
1384
+ 344,
1385
+ 786,
1386
+ 360
1387
+ ],
1388
+ "page_idx": 7
1389
+ },
1390
+ {
1391
+ "type": "text",
1392
+ "text": "paragraph\\\" or \\\\\" move the table in between the two figures from top to bottom \\\".",
1393
+ "bbox": [
1394
+ 149,
1395
+ 379,
1396
+ 482,
1397
+ 399
1398
+ ],
1399
+ "page_idx": 7
1400
+ },
1401
+ {
1402
+ "type": "list",
1403
+ "sub_type": "text",
1404
+ "list_items": [
1405
+ "3. Please emphasize edit requests that deal with more abstract pdf elements rather low level elements like words and lines.",
1406
+ "4. Use the x and y coordinates of the pdf-elements to figure out the relative spatial relations left of, right of.",
1407
+ "top, bottom or to assess which paragraph is first, second or third.",
1408
+ "5. Please count the number and order of pdf elements sequentially and correctly. Mistakes in counting are costly. Please think step by step while doing it.",
1409
+ "6. Please do not generate edit requests that implies modifying multiple pdf elements when there exists actually only one, for example, asking to swap the order of list-items when acutely there is a single list item.",
1410
+ "7. Please do not generate edits that call for left/right column, where there exists only a single column.",
1411
+ "8. In the presence of list, please generate at least one edit requests on list bullets. The output then should also refer only to the bullets, not the whole list. Check to see whether the bullets are numeric, alphabetic, or plain circular bullets before requesting its editing.",
1412
+ "9. Please geenrate the minimal bounding box that suffices to make the edit, i.e. ground the user request to the most fine-grained pdf-element.",
1413
+ "10. Please generate multiple back and forth conversations. The user query should follow this format: \"<phrase requesting the location>, \\\"<user edit query>\\\". For example: 'Please find the location to make this edit, \\ \"Change list bullets to numeric\\\".",
1414
+ "11. Only generate questions and answers that can be answered with the given information, do not use background information.",
1415
+ "12. Please do not hallucinate pdf elements, its content, its subelements or edit requests.",
1416
+ "13. It's better to not generate requests instead of generating wrong requests."
1417
+ ],
1418
+ "bbox": [
1419
+ 115,
1420
+ 407,
1421
+ 482,
1422
+ 901
1423
+ ],
1424
+ "page_idx": 7
1425
+ },
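Guideline 9 above asks for the minimal bounding box that suffices for the edit, i.e. the tightest box around the target element. A small sketch of how such a box can be computed as the union of the target's subelement boxes (the helper name is hypothetical):

def minimal_bbox(boxes):
    """Union of [x1, y1, x2, y2] boxes: the tightest box enclosing them all."""
    x1 = min(b[0] for b in boxes); y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes); y2 = max(b[3] for b in boxes)
    return [x1, y1, x2, y2]

print(minimal_bbox([[84, 397, 377, 414], [84, 382, 484, 413]]))  # [84, 382, 484, 414]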
1426
+ {
1427
+ "type": "text",
1428
+ "text": "C Code-NLI Prompt and Samples",
1429
+ "text_level": 1,
1430
+ "bbox": [
1431
+ 509,
1432
+ 374,
1433
+ 821,
1434
+ 392
1435
+ ],
1436
+ "page_idx": 7
1437
+ },
1438
+ {
1439
+ "type": "text",
1440
+ "text": "To clean noisy synthetic data, we employ Code-NLI. The prompt used for Code-NLI includes a system prompt, that contains Python API function definitions, and few-shot prompts. We provide the system prompt given below:",
1441
+ "bbox": [
1442
+ 507,
1443
+ 400,
1444
+ 884,
1445
+ 481
1446
+ ],
1447
+ "page_idx": 7
1448
+ },
1449
+ {
1450
+ "type": "list",
1451
+ "sub_type": "text",
1452
+ "list_items": [
1453
+ "You are an AI visual assistant that can do Natural Language Inference using python code. You will receive information describing a hypothesis. This includes a user edit request on pdf and a system output specifying the edit location with a bounding box. Your job is to create a python code, using listed APIs, that verifies that the bounding box correctly localizes the edit target in the user request. Each statement in the python code needs to be true to verify the correctness of the hypothesis.",
1454
+ "The user edit request focuses on editing different type of pdf elements: title, paragraph, list, figure, and table. The elements could have subelements: title -> {bullets -> {bullet}, list_item -> {line}}, paragraph -> {line}, title -> {line}. It refers to different elements in a compositional and hierarchical manner. For example, last line of second paragraph, first list-item of third list, etc. The system output localizes the element/subelement requiring edit with a bounding box. The format of the bounding box is [x1,y1,x2,y2] where (x1,y1) is the coordinate of top left corner and (x2,y2) is bottom right corner, where x2>x1 and y2>y1.",
1455
+ "Please use the following APIs to verify that the output bounding box indeed correctly captures the edit element/subelement. Please output only the python code using the listed APIs and nothing else. The element (dict) follows this format: {element_type: <element_type>,_bbox: [<x1,y1,x2,y2>]}, value: <value>, subelements:[<sub>subelement1,..]}.",
1456
+ "def is_element.element_type: str, bbox: list) -> bool:\n \"checks whether the element in_bbox is element_type\"\n \"Args: element_type (str): any element type ' title', 'paragraph', 'list', 'line', 'figure', 'table', 'bullets', '"
1457
+ ],
1458
+ "bbox": [
1459
+ 510,
1460
+ 492,
1461
+ 878,
1462
+ 920
1463
+ ],
1464
+ "page_idx": 7
1465
+ },
1466
+ {
1467
+ "type": "page_number",
1468
+ "text": "31134",
1469
+ "bbox": [
1470
+ 475,
1471
+ 927,
1472
+ 524,
1473
+ 940
1474
+ ],
1475
+ "page_idx": 7
1476
+ },
1477
+ {
1478
+ "type": "code",
1479
+ "sub_type": "code",
1480
+ "code_caption": [],
1481
+ "code_body": "def in_sectionsection section_title: str, bbox: list) -> bool:\n '''checks whether the bbox is in the section_title''' \n '''Args: section_title (str): the string contained in the title \n bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n ''' \ndef contains_string (string: str,_bbox: list) -> bool:\n '''checks whether the element in_bbox contains string''' \n '''Args: string (str): string to check \n _bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n ''' \ndef check_element_order_from_top_or_title( top_or_title (str), element_type: str,_bbox: \n list, order: int, column: str = None) -> bool: \n '''checks the sequence order of the element in \n _bbox of element_type from top of pdf or the section title where it lies''' \n '''Args: \n top_or_title (str): Either 'top' or 'title' from where to beign counting \n element_type (str): First hierarchy elements -- 'title', 'paragraph', 'list', 'figure' or 'table' \n _bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n order (int): denoting the sequence order of bounding box of element_type. 1,2,3 order counts from top, -1,-2.. order counts from last \n column (str): to check the order in left or right column. Defaults to None when pdf is single column. \n */\ndef check_subelement_order_from_top_or_title( top_or_title (str), subelement_type: str,_bbox: \n list, order: int, column: str = None) -> int: \n '''checks the sequence order of the subelement in bounding box of subelement_type from top of pdf or the section title where it lies''' \n '''Args: \n top_or_title (str): Either 'top' or 'title' from where to beign counting \n subelement_type (str): Second hierarchy elements -- 'line' (title and paragraph subelement), \\ 'bullets' (list subelement), '\\list_item' (list subelement) \n _bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n order (int): denoting the sequence order of bounding box of subelement_type. 1,2,3 order counts from top, -1,-2.. order counts from last \n column (str): to check the order in left or right column. Defaults to None when pdf is single column. \n */\ndef check_subelement_order_in_element( subelement_type: str,_bbox: list, order: int) -> int: \n '''checks the sequence order of the subelement in bounding box of subelement_type within the element in which_bbox lies''' \n '''Args: \n subelement_type (str): Second hierarchy elements -- 'line' (title and paragraph subelement), 'bullets' (list subelement), 'list_item' (list subelement) \n _bbox (list): a bounding box of the type [",
1482
+ "guess_lang": "python",
1483
+ "bbox": [
1484
+ 109,
1485
+ 86,
1486
+ 497,
1487
+ 914
1488
+ ],
1489
+ "page_idx": 8
1490
+ },
1491
+ {
1492
+ "type": "code",
1493
+ "sub_type": "code",
1494
+ "code_caption": [],
1495
+ "code_body": "x1, y1, x2, y2]\norder (int): denoting the sequence order of_bbox of subelement_type. 1,2,3\norder counts from top, -1,-2.. order counts from last\n\" \"\ndef check_subsubelement_order_from_top_or_title(top_or_title (str), subsublement_type: str,_bbox: list, order: int, column: str = None) -> int:\n \" checks the sequence order of the subsublement in_bbox of subsublement_type from top of pdf or the section title where it lies\"\n \"Args:\n top_or_title (str): Either 'top' or 'title' from where to beign counting subsublement_type (str): Third hierarchy elements -- 'bullet' (subsublement of bullets), 'line' (subsublement of list_item)\nbbox (list): a bounding box of the type [x1, y1, x2, y2]\norder (int): denoting the sequence order of_bbox of subelement_type. 1,2,3 order counts from top, -1,-2.. order counts from last column (str): to check the order in left or right column. Defaults to None when pdf is single column.\n\" \"\ndef check_subsubelement_order_in_subelement(subsublement_type: str,_bbox: list, order: int) -> int:\n \" checks the sequence order of the subsublement in_bbox of subsublement_type within the subelement in which_bbox lies\"\n \"Args:\n subsublement_type (str): Third hierarchy elements -- 'bullet' (subsublement of bullets), 'line' (subsublement of list_item)\nbbox (list): a bounding box of the type [x1, y1, x2, y2]\norder (int): denoting the sequence order of_bbox of subsublement_type. 1,2,3 order counts from top, -1,-2.. order counts from last",
1496
+ "guess_lang": "python",
1497
+ "bbox": [
1498
+ 500,
1499
+ 86,
1500
+ 890,
1501
+ 590
1502
+ ],
1503
+ "page_idx": 8
1504
+ },
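To make the verification loop concrete, here is a minimal sketch of how the generated Pythonic statements could be executed against implemented APIs to keep or reject a synthetic sample. The API body below is a toy stand-in over a flat detection list, not the paper's actual implementation:

DETECTIONS = [{"type": "list_item", "bbox": [92, 398, 485, 459]}]

def is_element(element_type, _bbox):
    # Toy stand-in for the real API: look the box up in a flat detection list.
    return any(e["type"] == element_type and e["bbox"] == _bbox for e in DETECTIONS)

def verify(statements):
    """Keep a generated sample only if every Code-NLI check evaluates to True."""
    try:
        return all(eval(s) for s in statements)  # statements are Pythonic calls
    except Exception:
        return False  # malformed generated code counts as a failed verification

print(verify(["is_element('list_item', [92, 398, 485, 459])"]))  # True

A sample passes the filter only when every statement holds, which is what makes the cleaning fully algorithmic.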
1505
+ {
1506
+ "type": "text",
1507
+ "text": "We also include some sample Python code generated by Code-NLI for verification in Figure 8.",
1508
+ "bbox": [
1509
+ 507,
1510
+ 596,
1511
+ 884,
1512
+ 629
1513
+ ],
1514
+ "page_idx": 8
1515
+ },
1516
+ {
1517
+ "type": "text",
1518
+ "text": "D Dataset Statistics",
1519
+ "text_level": 1,
1520
+ "bbox": [
1521
+ 509,
1522
+ 642,
1523
+ 697,
1524
+ 657
1525
+ ],
1526
+ "page_idx": 8
1527
+ },
1528
+ {
1529
+ "type": "text",
1530
+ "text": "The total size of the synthetically generated data is 3M samples. The topic range is the same as the topic distribution of the base datasets: scientific articles, patents, finance, tenders, laws, and manuals. Similarly, image resolution is the same as base datasets: $1025 \\times 1025$ for DocLayNet and $800 \\times 600$ for PubLayNet.",
1531
+ "bbox": [
1532
+ 507,
1533
+ 670,
1534
+ 884,
1535
+ 782
1536
+ ],
1537
+ "page_idx": 8
1538
+ },
1539
+ {
1540
+ "type": "text",
1541
+ "text": "E Model Architecture",
1542
+ "text_level": 1,
1543
+ "bbox": [
1544
+ 509,
1545
+ 797,
1546
+ 717,
1547
+ 812
1548
+ ],
1549
+ "page_idx": 8
1550
+ },
1551
+ {
1552
+ "type": "text",
1553
+ "text": ") Our model, DELOC, is based on Phi-3-V model. The Phi-3-V model's architecture is essentially composed of two components - an image encoder (CLIP ViT-L/14 (Radford et al., 2021)) and a transformer decoder (phi-3-mini). The visual tokens from the image encoder and concatenate with text",
1554
+ "bbox": [
1555
+ 507,
1556
+ 824,
1557
+ 884,
1558
+ 921
1559
+ ],
1560
+ "page_idx": 8
1561
+ },
1562
+ {
1563
+ "type": "page_number",
1564
+ "text": "31135",
1565
+ "bbox": [
1566
+ 475,
1567
+ 927,
1568
+ 524,
1569
+ 940
1570
+ ],
1571
+ "page_idx": 8
1572
+ },
1573
+ {
1574
+ "type": "text",
1575
+ "text": "User: Please locate the part that needs to be edited to process this query, 'Delete the last line of the last paragraph in the left column'.",
1576
+ "bbox": [
1577
+ 146,
1578
+ 87,
1579
+ 442,
1580
+ 127
1581
+ ],
1582
+ "page_idx": 9
1583
+ },
1584
+ {
1585
+ "type": "text",
1586
+ "text": "System: The last paragraph in the left column is located at [84, 654, 484, 813] and the last line is at [84, 797, 140, 814].",
1587
+ "bbox": [
1588
+ 146,
1589
+ 137,
1590
+ 445,
1591
+ 175
1592
+ ],
1593
+ "page_idx": 9
1594
+ },
1595
+ {
1596
+ "type": "image",
1597
+ "img_path": "images/a7ca97b45a5e4edf62988dd8e494e6dcc3ce4a209cc0416d0e09faa1672a2d20.jpg",
1598
+ "image_caption": [],
1599
+ "image_footnote": [],
1600
+ "bbox": [
1601
+ 460,
1602
+ 112,
1603
+ 529,
1604
+ 156
1605
+ ],
1606
+ "page_idx": 9
1607
+ },
1608
+ {
1609
+ "type": "text",
1610
+ "text": "is_element('list_item', [92,398,485,459]) \nin_section('A.1.Environment', [92,398,485,459]) \ncheck_subelement_order_in_element('list_item', [92,398,485,459], 2)",
1611
+ "bbox": [
1612
+ 534,
1613
+ 87,
1614
+ 813,
1615
+ 177
1616
+ ],
1617
+ "page_idx": 9
1618
+ },
1619
+ {
1620
+ "type": "text",
1621
+ "text": "User: Please find the location that needs to be edited to process this query, 'Delete the reference \"[23]\" from the second paragraph in the left column'.",
1622
+ "bbox": [
1623
+ 146,
1624
+ 200,
1625
+ 445,
1626
+ 250
1627
+ ],
1628
+ "page_idx": 9
1629
+ },
1630
+ {
1631
+ "type": "text",
1632
+ "text": "System: The second paragraph in the left column is located at [84, 410, 484, 498] and the reference \"[23]\" is at [84, 482, 167, 499].",
1633
+ "bbox": [
1634
+ 146,
1635
+ 263,
1636
+ 433,
1637
+ 302
1638
+ ],
1639
+ "page_idx": 9
1640
+ },
1641
+ {
1642
+ "type": "image",
1643
+ "img_path": "images/6937a613b7a1c3104cba624cf2eed197664f770b1ee7fcbcd80f1db17754f3be.jpg",
1644
+ "image_caption": [],
1645
+ "image_footnote": [],
1646
+ "bbox": [
1647
+ 457,
1648
+ 227,
1649
+ 529,
1650
+ 269
1651
+ ],
1652
+ "page_idx": 9
1653
+ },
1654
+ {
1655
+ "type": "text",
1656
+ "text": "is_element('paragraph', [84, 410, 484, 498]) \ncheck_element_order_from_top_or_title('top', 'paragraph', [84, 410, 484, 498], 2, column='left') \ncontains_string(['[23]', [84, 482, 167, 499])",
1657
+ "bbox": [
1658
+ 536,
1659
+ 200,
1660
+ 803,
1661
+ 302
1662
+ ],
1663
+ "page_idx": 9
1664
+ },
1665
+ {
1666
+ "type": "image",
1667
+ "img_path": "images/aeac7283b8e342b7489f868ad18d6a6956cb6f467918627ef010a991abdef461.jpg",
1668
+ "image_caption": [
1669
+ "Figure 8: Sample verification code generated by Code-NLI."
1670
+ ],
1671
+ "image_footnote": [
1672
+ "The length of the table \"Exhibit 2\" is increased. The spread of the table is increased towards the left margin.."
1673
+ ],
1674
+ "bbox": [
1675
+ 126,
1676
+ 343,
1677
+ 344,
1678
+ 556
1679
+ ],
1680
+ "page_idx": 9
1681
+ },
1682
+ {
1683
+ "type": "image",
1684
+ "img_path": "images/a97ad259172bccbbc8ad15338ef927fa6b6241ab34fece8369cc737a9f929eed.jpg",
1685
+ "image_caption": [
1686
+ "Figure 9: Additional Qualitative sample prediction from DELOC demonstrating its capabilities."
1687
+ ],
1688
+ "image_footnote": [
1689
+ "Moved the text \"Golden Chopstick, Takeaway menu\" above the purple line"
1690
+ ],
1691
+ "bbox": [
1692
+ 354,
1693
+ 344,
1694
+ 598,
1695
+ 568
1696
+ ],
1697
+ "page_idx": 9
1698
+ },
1699
+ {
1700
+ "type": "image",
1701
+ "img_path": "images/00d5d6cd15d5ff3ebc3611e090cd1b138088bd89b6b1835a0a05884bb0263566.jpg",
1702
+ "image_caption": [],
1703
+ "image_footnote": [
1704
+ "GT: Pred:"
1705
+ ],
1706
+ "bbox": [
1707
+ 605,
1708
+ 344,
1709
+ 873,
1710
+ 562
1711
+ ],
1712
+ "page_idx": 9
1713
+ },
1714
+ {
1715
+ "type": "text",
1716
+ "text": "tokens in an interleaved manner and fed to the transformer decoder to output the prediction. In total, the number of parameters totals 4.2B.",
1717
+ "bbox": [
1718
+ 112,
1719
+ 642,
1720
+ 487,
1721
+ 688
1722
+ ],
1723
+ "page_idx": 9
1724
+ },
1725
+ {
1726
+ "type": "text",
1727
+ "text": "The Phi-3-mini is a transformer decoder with 3072 hidden dimension, 32 heads and 32 layers.",
1728
+ "bbox": [
1729
+ 112,
1730
+ 690,
1731
+ 485,
1732
+ 722
1733
+ ],
1734
+ "page_idx": 9
1735
+ },
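As a rough back-of-envelope check on these sizes: a standard decoder layer has about 12·h² parameters (roughly 4h² for attention and 8h² for an MLP with 4h expansion), so 32 layers at hidden size 3072 give about 3.6B, which together with embeddings and the roughly 0.3B CLIP ViT-L/14 encoder is in the ballpark of the 4.2B total quoted above. This is an approximation only; phi-3-mini's actual MLP width differs:

h, layers = 3072, 32
per_layer = 12 * h * h            # ~4h^2 attention + ~8h^2 MLP (assumed 4h expansion)
decoder = layers * per_layer      # ~3.6B before embeddings
print(f"{decoder / 1e9:.2f}B decoder params (approx.)")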
1736
+ {
1737
+ "type": "text",
1738
+ "text": "F Additional Training Details",
1739
+ "text_level": 1,
1740
+ "bbox": [
1741
+ 112,
1742
+ 734,
1743
+ 386,
1744
+ 750
1745
+ ],
1746
+ "page_idx": 9
1747
+ },
1748
+ {
1749
+ "type": "text",
1750
+ "text": "The input image to our model is resized to $900 \\times 900$ . The text-heavy nature of PDF images requires that the model handle image resolutions dynamically according to the resolution of the text. Higher resolution text should result in higher image input resolution, and vice versa. This is taken care of by the dynamic cropping strategy (Dong et al., 2024) in the Phi-3-V model. It allows the input image to be dynamically cropped into a 2d array of blocks, which are then concatenated to represent the whole",
1751
+ "bbox": [
1752
+ 112,
1753
+ 760,
1754
+ 489,
1755
+ 920
1756
+ ],
1757
+ "page_idx": 9
1758
+ },
1759
+ {
1760
+ "type": "text",
1761
+ "text": "image.",
1762
+ "bbox": [
1763
+ 509,
1764
+ 642,
1765
+ 564,
1766
+ 657
1767
+ ],
1768
+ "page_idx": 9
1769
+ },
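A minimal sketch of the dynamic cropping idea described above: the resized page is split into a 2D grid of fixed-size blocks whose count grows with input resolution. The block size and grid layout here are illustrative assumptions, not the exact Phi-3-V scheme:

from PIL import Image

def crop_blocks(img, block=448):
    """Split an image into a 2D grid of block x block crops (left-to-right, top-to-bottom)."""
    cols = max(1, img.width // block)
    rows = max(1, img.height // block)
    return [img.crop((c * block, r * block, (c + 1) * block, (r + 1) * block))
            for r in range(rows) for c in range(cols)]

page = Image.new("RGB", (900, 900))   # e.g., the resized 900 x 900 PDF page
blocks = crop_blocks(page)            # a 2 x 2 grid of 448 x 448 crops here
# The block crops are then encoded and concatenated to represent the whole image.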
1770
+ {
1771
+ "type": "text",
1772
+ "text": "G Additional Qualitative Examples",
1773
+ "text_level": 1,
1774
+ "bbox": [
1775
+ 507,
1776
+ 670,
1777
+ 831,
1778
+ 686
1779
+ ],
1780
+ "page_idx": 9
1781
+ },
1782
+ {
1783
+ "type": "text",
1784
+ "text": "We add more qualitative samples to further illustrate the capabilities of DELOC. As can be seen in Figure 9, DELOC is able to ground user edit requests well both for PDFs and design documents.",
1785
+ "bbox": [
1786
+ 507,
1787
+ 695,
1788
+ 884,
1789
+ 759
1790
+ ],
1791
+ "page_idx": 9
1792
+ },
1793
+ {
1794
+ "type": "page_number",
1795
+ "text": "31136",
1796
+ "bbox": [
1797
+ 475,
1798
+ 927,
1799
+ 524,
1800
+ 940
1801
+ ],
1802
+ "page_idx": 9
1803
+ }
1804
+ ]
2025/DELOC_ Document Element Localizer/b1ff4230-9685-4b29-9a1f-525a8461a4fb_model.json ADDED
@@ -0,0 +1,2585 @@
1
+ [
2
+ [
3
+ {
4
+ "type": "title",
5
+ "bbox": [
6
+ 0.297,
7
+ 0.091,
8
+ 0.703,
9
+ 0.11
10
+ ],
11
+ "angle": 0,
12
+ "content": "DELOC: Document Element Localizer"
13
+ },
14
+ {
15
+ "type": "title",
16
+ "bbox": [
17
+ 0.152,
18
+ 0.143,
19
+ 0.851,
20
+ 0.162
21
+ ],
22
+ "angle": 0,
23
+ "content": "Hammad Ayyubi\\(^{1*}\\), Puneet Mathur\\(^{2}\\), Md Mehrab Tanjim\\(^{2}\\), Vlad I Morariu\\(^{2}\\)"
24
+ },
25
+ {
26
+ "type": "text",
27
+ "bbox": [
28
+ 0.325,
29
+ 0.164,
30
+ 0.674,
31
+ 0.182
32
+ ],
33
+ "angle": 0,
34
+ "content": "\\(^{1}\\)Columbia University, \\(^{2}\\)Adobe Research,"
35
+ },
36
+ {
37
+ "type": "text",
38
+ "bbox": [
39
+ 0.359,
40
+ 0.184,
41
+ 0.64,
42
+ 0.198
43
+ ],
44
+ "angle": 0,
45
+ "content": "Correspondence: hayyubi@cs.columbia.edu"
46
+ },
47
+ {
48
+ "type": "title",
49
+ "bbox": [
50
+ 0.261,
51
+ 0.261,
52
+ 0.342,
53
+ 0.277
54
+ ],
55
+ "angle": 0,
56
+ "content": "Abstract"
57
+ },
58
+ {
59
+ "type": "text",
60
+ "bbox": [
61
+ 0.142,
62
+ 0.289,
63
+ 0.461,
64
+ 0.658
65
+ ],
66
+ "angle": 0,
67
+ "content": "Editing documents and PDFs using natural language instructions is desirable for many reasons - ease of use, increasing accessibility to non-technical users, and for creativity. To do this automatically, a system needs to first understand the user's intent and convert this to an executable plan or command, and then the system needs to identify or localize the elements that the user desires to edit. While there exist methods that can accomplish these tasks, a major bottleneck in these systems is the inability to ground the spatial edit location effectively. We address this gap through our proposed system, DELOC (Document Element LOCALizer). DELOC adapts the grounding capabilities of existing Multimodal Large Language Model (MLLM) from natural images to PDFs. This adaptation involves two novel contributions: 1) synthetically generating PDF-grounding instruction tuning data from partially annotated datasets; and 2) synthetic data cleaning via Code-NLI, an NLI-inspired process to clean data using generated Python code. The effectiveness of DELOC is apparent in the \\(>2\\mathrm{x}\\) zero-shot improvement it achieves over the next best MLLM, GPT-4o."
68
+ },
69
+ {
70
+ "type": "title",
71
+ "bbox": [
72
+ 0.115,
73
+ 0.671,
74
+ 0.262,
75
+ 0.685
76
+ ],
77
+ "angle": 0,
78
+ "content": "1 Introduction"
79
+ },
80
+ {
81
+ "type": "text",
82
+ "bbox": [
83
+ 0.113,
84
+ 0.696,
85
+ 0.489,
86
+ 0.791
87
+ ],
88
+ "angle": 0,
89
+ "content": "Editing documents and PDFs via natural language (Mathur et al., 2023; Suri et al., 2024) is an innovative and user-friendly advancement. This technology makes it easier for non-technical people to edit PDFs, speeds up the editing process, and facilitates document editing on a mobile device."
90
+ },
91
+ {
92
+ "type": "text",
93
+ "bbox": [
94
+ 0.113,
95
+ 0.793,
96
+ 0.489,
97
+ 0.889
98
+ ],
99
+ "angle": 0,
100
+ "content": "Typically, a two-stage process has been followed to edit PDFs from natural language requests. In the first stage, a model is required to predict the edit location (green highlight in Figure 1) spatially in the PDF. The edit location is a PDF element (paragraph, line, list, table, etc.). The second stage"
101
+ },
102
+ {
103
+ "type": "image",
104
+ "bbox": [
105
+ 0.516,
106
+ 0.261,
107
+ 0.882,
108
+ 0.45
109
+ ],
110
+ "angle": 0,
111
+ "content": null
112
+ },
113
+ {
114
+ "type": "image_caption",
115
+ "bbox": [
116
+ 0.508,
117
+ 0.455,
118
+ 0.885,
119
+ 0.553
120
+ ],
121
+ "angle": 0,
122
+ "content": "Figure 1: Illustration of a typical PDF edit process. Models predict Edit Location and Edit Command, which are combined to effect the required edit. This work focuses on improving the bottleneck step - Edit Location. To this end, the proposed system, DELOC, generates large-scale (3M) synthetic data to instruction tune a Multimodal LLM."
123
+ },
124
+ {
125
+ "type": "text",
126
+ "bbox": [
127
+ 0.508,
128
+ 0.575,
129
+ 0.884,
130
+ 0.72
131
+ ],
132
+ "angle": 0,
133
+ "content": "involves predicting the edit command (replace in Figure 1). The predicted bounding box and edit command can then be simply combined to effect the desired change. Existing PDF editing systems achieve high accuracy \\((>85\\%)\\) in predicting edit commands (Mathur et al., 2023) but struggle with bounding box precision \\((<50\\%)\\) (Suri et al., 2024). As such, we focus on improving the spatial PDF element localization for a given edit request."
134
+ },
135
+ {
136
+ "type": "text",
137
+ "bbox": [
138
+ 0.508,
139
+ 0.725,
140
+ 0.884,
141
+ 0.869
142
+ ],
143
+ "angle": 0,
144
+ "content": "State-of-the-art systems ground user queries in images (Wang et al., 2023; You et al., 2023), mobile UIs (You et al., 2024), and web UIs (Hong et al., 2023), but they do not transfer directly to PDFs. PDFs are more verbose, and edit requests follow a hierarchical structure requiring domain knowledge. For instance, a request like \"first list item of the third list in Section A.1\" demands an understanding of the composition: section \\(\\rightarrow\\) list \\(\\rightarrow\\) list item."
145
+ },
146
+ {
147
+ "type": "text",
148
+ "bbox": [
149
+ 0.508,
150
+ 0.874,
151
+ 0.884,
152
+ 0.922
153
+ ],
154
+ "angle": 0,
155
+ "content": "To address these challenges, we propose DELOC (Document Element LOCALizer), which adapts the strong grounding capabilities of Mul"
156
+ },
157
+ {
158
+ "type": "page_footnote",
159
+ "bbox": [
160
+ 0.114,
161
+ 0.895,
162
+ 0.489,
163
+ 0.922
164
+ ],
165
+ "angle": 0,
166
+ "content": "*Work done during an internship at Adobe Research, Document Intelligence Lab (DIL)"
167
+ },
168
+ {
169
+ "type": "page_number",
170
+ "bbox": [
171
+ 0.475,
172
+ 0.928,
173
+ 0.526,
174
+ 0.941
175
+ ],
176
+ "angle": 0,
177
+ "content": "31127"
178
+ },
179
+ {
180
+ "type": "footer",
181
+ "bbox": [
182
+ 0.153,
183
+ 0.946,
184
+ 0.845,
185
+ 0.973
186
+ ],
187
+ "angle": 0,
188
+ "content": "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 31127-31136 November 4-9, 2025 ©2025 Association for Computational Linguistics"
189
+ }
190
+ ],
191
+ [
192
+ {
193
+ "type": "image",
194
+ "bbox": [
195
+ 0.141,
196
+ 0.084,
197
+ 0.862,
198
+ 0.26
199
+ ],
200
+ "angle": 0,
201
+ "content": null
202
+ },
203
+ {
204
+ "type": "image_caption",
205
+ "bbox": [
206
+ 0.113,
207
+ 0.266,
208
+ 0.883,
209
+ 0.296
210
+ ],
211
+ "angle": 0,
212
+ "content": "Figure 2: Overview of DELOC. Document elements, their bounding box, and their hierarchical relations are first detected (DET) and formatted into a JSON structure (FJ). This JSON is fed to an LLM to generate synthetic data."
213
+ },
214
+ {
215
+ "type": "text",
216
+ "bbox": [
217
+ 0.113,
218
+ 0.311,
219
+ 0.489,
220
+ 0.407
221
+ ],
222
+ "angle": 0,
223
+ "content": "timodal Large Language Models (MLLMs) from natural images to PDFs. This adaptation involves instruction-tuning an MLLM specifically for PDFs. Our key contribution is the synthetic generation of high-quality instruction-tuning data, followed by an automated cleaning process called Code-NLI."
224
+ },
225
+ {
226
+ "type": "text",
227
+ "bbox": [
228
+ 0.113,
229
+ 0.413,
230
+ 0.49,
231
+ 0.686
232
+ ],
233
+ "angle": 0,
234
+ "content": "We first represent PDF elements (paragraphs, lines, lists, tables, titles, sections) in a spatially aware hierarchical JSON format (Figure 2), capturing each element's bounding box and hierarchical relationships (e.g., paragraph \\(\\rightarrow\\) lines, list \\(\\rightarrow\\) list items). This information is sourced from partially annotated PDF Document Analysis datasets (e.g., PubLayNet (Zhong et al., 2019)), using existing annotations and heuristically generating missing ones. Next, we input this structured PDF representation into a Large Language Model (LLM) to generate synthetic user requests for PDF editing, along with system responses specifying edit locations via bounding boxes. The hierarchical structure enables the LLM to produce compositional requests that resemble real user queries, while the bounding box data ensures precise edit location predictions."
235
+ },
236
+ {
237
+ "type": "text",
238
+ "bbox": [
239
+ 0.113,
240
+ 0.692,
241
+ 0.489,
242
+ 0.82
243
+ ],
244
+ "angle": 0,
245
+ "content": "We further clean noisy generated samples using a proposed process called Code-NLI (Code-based Natural Language Inference). Code-NLI (Figure 3) treats the generated user edit request and system response as the hypothesis, and the PDF as the premise. To verify the hypothesis, it leverages an LLM to generate Python code, which is then executed to filter out noisy samples."
246
+ },
247
+ {
248
+ "type": "text",
249
+ "bbox": [
250
+ 0.113,
251
+ 0.826,
252
+ 0.49,
253
+ 0.922
254
+ ],
255
+ "angle": 0,
256
+ "content": "To demonstrate the effectiveness of DELOC, we evaluate it on DocEdit (Mathur et al., 2023). Our results demonstrate that DELOC outperforms all existing zero-shot models on PDFs, including proprietary MLLM GPT-4o by \\(>2\\mathrm{x}\\). Our ablations demonstrate DELOC's performance improves with"
257
+ },
258
+ {
259
+ "type": "text",
260
+ "bbox": [
261
+ 0.508,
262
+ 0.311,
263
+ 0.885,
264
+ 0.343
265
+ ],
266
+ "angle": 0,
267
+ "content": "data scale and data cleaning via Code-NLI is crucial to its performance."
268
+ },
269
+ {
270
+ "type": "title",
271
+ "bbox": [
272
+ 0.509,
273
+ 0.356,
274
+ 0.666,
275
+ 0.371
276
+ ],
277
+ "angle": 0,
278
+ "content": "2 Related Work"
279
+ },
280
+ {
281
+ "type": "text",
282
+ "bbox": [
283
+ 0.507,
284
+ 0.383,
285
+ 0.885,
286
+ 0.543
287
+ ],
288
+ "angle": 0,
289
+ "content": "There has been a surge of interest in natural language based PDF edit localization. DocEdit (Mathur et al., 2023) trains a multimodal transformer that directly regresses the edit bounding box given the document as input. DocEdit-v2 (Suri et al., 2024) trains an encoder-decoder architecture that outputs a segmentation mask for the edit location. In training a model from scratch, these methods fail to utilize the excellent grounding capabilities of recent MLLMs."
290
+ },
291
+ {
292
+ "type": "text",
293
+ "bbox": [
294
+ 0.508,
295
+ 0.544,
296
+ 0.884,
297
+ 0.753
298
+ ],
299
+ "angle": 0,
300
+ "content": "A number of MLLMs (Chen et al., 2023; Zhang et al., 2023; Yuan et al., 2023; Lv et al., 2023; Lai et al., 2024; Ma et al., 2024) have been proposed that have shown impressive grounding abilities on natural images. Works like Ferret-UI (You et al., 2024), Cog-Agent (Hong et al., 2023), and MultiUI (Liu et al., 2024) have successfully adapted these MLLMs for mobile screens and web pages. We take inspiration from these works to adapt MLLMs for PDFs using large-scale synthetic data. In comparison, Agent-DocEdit (Wu et al., 2024) finetunes a grounding model, GLIP (Li* et al., 2022), only on a small training set."
301
+ },
302
+ {
303
+ "type": "title",
304
+ "bbox": [
305
+ 0.509,
306
+ 0.767,
307
+ 0.613,
308
+ 0.781
309
+ ],
310
+ "angle": 0,
311
+ "content": "3 Method"
312
+ },
313
+ {
314
+ "type": "text",
315
+ "bbox": [
316
+ 0.508,
317
+ 0.793,
318
+ 0.885,
319
+ 0.922
320
+ ],
321
+ "angle": 0,
322
+ "content": "Our method adapts the excellent query understanding and grounding capabilities of existing Multimodal Large Language Models (MLLMs) for grounding edit queries in PDFs. To this end, we: 1) generate diverse, compositional, and clean instruction-tuning data; and, 2) instruction-tune a MLLM on the generated data. We describe the synthetic data generation process below."
323
+ },
324
+ {
325
+ "type": "page_number",
326
+ "bbox": [
327
+ 0.476,
328
+ 0.928,
329
+ 0.526,
330
+ 0.941
331
+ ],
332
+ "angle": 0,
333
+ "content": "31128"
334
+ }
335
+ ],
336
+ [
337
+ {
338
+ "type": "image",
339
+ "bbox": [
340
+ 0.121,
341
+ 0.089,
342
+ 0.307,
343
+ 0.252
344
+ ],
345
+ "angle": 0,
346
+ "content": null
347
+ },
348
+ {
349
+ "type": "image",
350
+ "bbox": [
351
+ 0.312,
352
+ 0.088,
353
+ 0.47,
354
+ 0.24
355
+ ],
356
+ "angle": 0,
357
+ "content": null
358
+ },
359
+ {
360
+ "type": "image_caption",
361
+ "bbox": [
362
+ 0.354,
363
+ 0.243,
364
+ 0.435,
365
+ 0.253
366
+ ],
367
+ "angle": 0,
368
+ "content": "Generated Code"
369
+ },
370
+ {
371
+ "type": "image_caption",
372
+ "bbox": [
373
+ 0.114,
374
+ 0.258,
375
+ 0.486,
376
+ 0.302
377
+ ],
378
+ "angle": 0,
379
+ "content": "Figure 3: Code-NLI based data cleaning. Generated data is treated as hypothesis, which is validated by LLM generated Python code, given PDF premise."
380
+ },
381
+ {
382
+ "type": "text",
383
+ "bbox": [
384
+ 0.113,
385
+ 0.32,
386
+ 0.487,
387
+ 0.384
388
+ ],
389
+ "angle": 0,
390
+ "content": "Given a lack of large-scale instruction-tuning data for grounding edit queries in PDFs, we build an automatic synthetic data generation process. We discuss below the two stages of this process:"
391
+ },
392
+ {
393
+ "type": "text",
394
+ "bbox": [
395
+ 0.113,
396
+ 0.388,
397
+ 0.489,
398
+ 0.709
399
+ ],
400
+ "angle": 0,
401
+ "content": "LLM powered data generation. We present the PDF to an LLM and prompt it to generate edit requests along with the corresponding bounding box (see Figure 2). Prompt details in Appendix B. To ensure that the generated edit requests capture the compositional nature of real user queries, we structure PDFs in a hierarchical JSON format that explicitly defines the parent-child relationships of each element. Each element contains information about its type (e.g., paragraph, line, list), its bounding box within the PDF, and its subelements (e.g., a line is a subelement of a paragraph). Additionally, we organize elements sequentially within the same hierarchy based on their y-coordinates, enabling the LLM to generate edit requests that reference these elements numerically (e.g., \"second list,\" \"last line\"). Moreover, the inclusion of a bounding box for each element allows the LLM to generate precise edit location in the system response for the corresponding edit query."
402
+ },
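A minimal sketch of the y-ordered hierarchical representation this paragraph describes, which is what makes references like "second list" or "last line" well defined (the nesting logic and field names are assumptions based on the prompt format):

def to_hierarchy(elements):
    """Sort same-level elements top-to-bottom so ordinal references resolve unambiguously."""
    for e in elements:
        e["subelements"] = sorted(e.get("subelements", []), key=lambda s: s["bbox"][1])
    return sorted(elements, key=lambda e: e["bbox"][1])

doc = to_hierarchy([
    {"type": "paragraph", "bbox": [84, 410, 484, 498],
     "subelements": [{"type": "line", "bbox": [84, 482, 484, 499]},
                     {"type": "line", "bbox": [84, 411, 484, 428]}]},
])
print(doc[0]["subelements"][0]["bbox"])  # first line of the paragraph: [84, 411, 484, 428]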
403
+ {
404
+ "type": "text",
405
+ "bbox": [
406
+ 0.112,
407
+ 0.713,
408
+ 0.49,
409
+ 0.92
410
+ ],
411
+ "angle": 0,
412
+ "content": "To obtain this hierarchical PDF representation, we require annotations for each PDF element, including its type, bounding box, and hierarchical subelements. However, there does not exist a single dataset that contains all this information for all element types. As such, we use different datasets - PubLayNet and DocLayNet (Pfitzmann et al., 2022) - each containing a subset of these elements. Table 1 shows a non-exhaustive list of these elements and the corresponding dataset containing them. If an element's hierarchical subelements' annotations do not exist natively in the dataset, we use heuristics to create them. These heuristics are rela"
413
+ },
414
+ {
415
+ "type": "table",
416
+ "bbox": [
417
+ 0.514,
418
+ 0.083,
419
+ 0.88,
420
+ 0.156
421
+ ],
422
+ "angle": 0,
423
+ "content": "<table><tr><td></td><td>Title</td><td colspan=\"2\">Paragraph</td><td colspan=\"3\">List</td><td>Tables</td><td>Figures</td></tr><tr><td>Datasets</td><td></td><td></td><td>Lines</td><td></td><td colspan=\"2\">List-items</td><td></td><td></td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td>Lines</td><td></td><td></td></tr><tr><td>PubLayNet</td><td>✓</td><td>✓</td><td>♠</td><td>✓</td><td>♠</td><td>♠</td><td>✓</td><td>✓</td></tr><tr><td>DocLayNet</td><td>✓</td><td>✓</td><td>✓</td><td>♠</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>"
424
+ },
425
+ {
426
+ "type": "table_caption",
427
+ "bbox": [
428
+ 0.509,
429
+ 0.16,
430
+ 0.883,
431
+ 0.203
432
+ ],
433
+ "angle": 0,
434
+ "content": "Table 1: Datasets and their PDF elements - either natively (✓) present or heuristically created (♠). Multiple datasets combine to give extensive elements coverage."
435
+ },
436
+ {
437
+ "type": "text",
438
+ "bbox": [
439
+ 0.508,
440
+ 0.219,
441
+ 0.884,
442
+ 0.364
443
+ ],
444
+ "angle": 0,
445
+ "content": "tively simple - comparing y-coordinates of words to create lines within paragraphs, comparing x-coordinates of lines to create list-items within lists, and so on. The granularity scope goes as low as words. This means we have annotations for words, equations, formulae, footnotes, page numbers, etc. Essentially, we tried to cover as many PDF elements as possible with the available datasets and heuristics. More details in Appendix A."
446
+ },
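A sketch of the kind of heuristic described above, grouping word boxes into lines whenever their top y-coordinates roughly agree (the tolerance value is an assumed parameter):

def words_to_lines(words, tol=4):
    """Group word bboxes into lines: words whose top y differs by <= tol share a line."""
    lines, current = [], []
    for w in sorted(words, key=lambda b: (b[1], b[0])):  # top-to-bottom, left-to-right
        if current and abs(w[1] - current[-1][1]) > tol:
            lines.append(current)
            current = []
        current.append(w)
    if current:
        lines.append(current)
    return lines

print(len(words_to_lines([[84, 411, 120, 428], [125, 412, 180, 428], [84, 432, 150, 448]])))  # 2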
447
+ {
448
+ "type": "text",
449
+ "bbox": [
450
+ 0.508,
451
+ 0.365,
452
+ 0.884,
453
+ 0.525
454
+ ],
455
+ "angle": 0,
456
+ "content": "Data cleaning with Code-NLI. Since the LLM generates data automatically, it can be noisy—for instance, an edit request might reference the second list item while the bounding box corresponds to the third. To filter out such inconsistencies, we draw inspiration from Natural Language Inference (NLI) (Bowman et al., 2015), which predicts whether a hypothesis aligns with a given premise. Here, the premise is the PDF, and the hypothesis is the generated user-system conversation."
457
+ },
458
+ {
459
+ "type": "text",
460
+ "bbox": [
461
+ 0.508,
462
+ 0.527,
463
+ 0.884,
464
+ 0.687
465
+ ],
466
+ "angle": 0,
467
+ "content": "To verify these conversations, we generate substatements that must hold true. For example, to confirm that a bounding box corresponds to the second list item, a counting statement must evaluate its position as 2 (Figure 3). These sub-statements are generated by an LLM as Pythonic function calls, which are then executed using our implemented APIs for verification. Pythonic statements make verification algorithmic and automatic. API details and prompts in Appendix C."
468
+ },
469
+ {
470
+ "type": "text",
471
+ "bbox": [
472
+ 0.508,
473
+ 0.688,
474
+ 0.882,
475
+ 0.752
476
+ ],
477
+ "angle": 0,
478
+ "content": "Once we have clean synthetic instruction-tuning data, we finetune a MLLM. The input is an image of the PDF and a user-edit request. The output is a bounding box localizing the edit element."
479
+ },
480
+ {
481
+ "type": "title",
482
+ "bbox": [
483
+ 0.509,
484
+ 0.766,
485
+ 0.655,
486
+ 0.783
487
+ ],
488
+ "angle": 0,
489
+ "content": "4 Experiments"
490
+ },
491
+ {
492
+ "type": "text",
493
+ "bbox": [
494
+ 0.508,
495
+ 0.793,
496
+ 0.884,
497
+ 0.922
498
+ ],
499
+ "angle": 0,
500
+ "content": "To balance quality and the cost of long context, we use LLaMA-3.1 70B (Grattafori and Team, 2024) for generating synthetic data. It is run on 40 Nvidia A100-80 GiB GPUs for 2 days. Next, GPT-4o (OpenAI, 2024) is used in Code-NLI cleaning as small context length allows us to optimize for quality. This results in \\(\\sim 3\\mathrm{M}\\) samples for instruction tuning. We keep \\(95\\%\\) samples for training and the rest"
501
+ },
502
+ {
503
+ "type": "page_number",
504
+ "bbox": [
505
+ 0.476,
506
+ 0.928,
507
+ 0.526,
508
+ 0.941
509
+ ],
510
+ "angle": 0,
511
+ "content": "31129"
512
+ }
513
+ ],
514
+ [
515
+ {
516
+ "type": "table",
517
+ "bbox": [
518
+ 0.119,
519
+ 0.083,
520
+ 0.486,
521
+ 0.251
522
+ ],
523
+ "angle": 0,
524
+ "content": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"3\">PDF</td><td colspan=\"3\">Design</td></tr><tr><td>A@0.5</td><td>A@0.30</td><td>A@0.25</td><td>A@0.5</td><td>A@0.30</td><td>A@0.25</td></tr><tr><td colspan=\"7\">Zero-Shot</td></tr><tr><td>GPT-4o</td><td>5.624</td><td>14.258</td><td>17.885</td><td>28.961</td><td>46.766</td><td>52.582</td></tr><tr><td>CogAgent</td><td>4.16</td><td>9.42</td><td>11.13</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Qwen2.5-VL</td><td>2.90</td><td>6.97</td><td>8.72</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Ferret-UI</td><td>0.0</td><td>0.22</td><td>0.28</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Phi-3-V</td><td>0.450</td><td>2.727</td><td>4.189</td><td>10.423</td><td>25.264</td><td>31.316</td></tr><tr><td>DELOC</td><td>14.703</td><td>30.391</td><td>35.198</td><td>32.421</td><td>51.489</td><td>57.445</td></tr><tr><td colspan=\"7\">Finetuned</td></tr><tr><td>DocEdit</td><td>36.500</td><td>-</td><td>-</td><td>34.340</td><td>-</td><td>-</td></tr><tr><td>DocEdit-v2</td><td>48.690</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Phi-3-V</td><td>26.567</td><td>48.524</td><td>54.371</td><td>34.523</td><td>57.183</td><td>62.272</td></tr><tr><td>DELOC</td><td>49.620</td><td>67.023</td><td>69.975</td><td>57.012</td><td>71.566</td><td>74.976</td></tr></table>"
525
+ },
526
+ {
527
+ "type": "table_caption",
528
+ "bbox": [
529
+ 0.113,
530
+ 0.254,
531
+ 0.49,
532
+ 0.326
533
+ ],
534
+ "angle": 0,
535
+ "content": "Table 2: Comparison of DELOC with state-of-the-art on DocEdit edit location (bbox) prediction. DELOC outperforms both existing Multimodal LLMs in zeroshot setting and specialized models in finetuned setting. \\( A@X \\) denotes Accuracy at IoU of \\( X \\)."
536
+ },
537
+ {
538
+ "type": "table",
539
+ "bbox": [
540
+ 0.192,
541
+ 0.336,
542
+ 0.412,
543
+ 0.384
544
+ ],
545
+ "angle": 0,
546
+ "content": "<table><tr><td>Model</td><td>A@0.5</td></tr><tr><td>DELOC</td><td>6.297</td></tr><tr><td>- w/o CODE-NLI Filtering</td><td>5.848</td></tr></table>"
547
+ },
548
+ {
549
+ "type": "table_caption",
550
+ "bbox": [
551
+ 0.114,
552
+ 0.388,
553
+ 0.49,
554
+ 0.417
555
+ ],
556
+ "angle": 0,
557
+ "content": "Table 3: Ablation of Code-NLI. The filtering step improves DELOC performance."
558
+ },
559
+ {
560
+ "type": "text",
561
+ "bbox": [
562
+ 0.114,
563
+ 0.434,
564
+ 0.489,
565
+ 0.451
566
+ ],
567
+ "angle": 0,
568
+ "content": "for validation. More data statistics in Appendix D."
569
+ },
570
+ {
571
+ "type": "text",
572
+ "bbox": [
573
+ 0.113,
574
+ 0.452,
575
+ 0.489,
576
+ 0.581
577
+ ],
578
+ "angle": 0,
579
+ "content": "For instruction-tuning Phi-3V (Abdin and Team, 2024) is used as it's small and thus easy to train. We use a learning rate of 5e-5, weight decay of 0.12, AdamW optimizer, and a linear scheduler with a warmup. It takes approximately 10 hours to train the model on 128 Nvidia A100-80 GiB GPUs. More architecture and training details are in Appendix E and Appendix F."
580
+ },
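A minimal PyTorch sketch of the stated optimizer and schedule; the total step count and warmup length are assumptions, since the paper only specifies AdamW, lr 5e-5, weight decay 0.12, and a linear scheduler with warmup:

import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(8, 8)              # stand-in for Phi-3-V
opt = AdamW(model.parameters(), lr=5e-5, weight_decay=0.12)

total_steps, warmup = 10_000, 500          # assumed values, not from the paper
sched = LambdaLR(opt, lambda s: s / warmup if s < warmup
                 else max(0.0, (total_steps - s) / (total_steps - warmup)))
# opt.step(); sched.step()                 # called once per training step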
581
+ {
582
+ "type": "text",
583
+ "bbox": [
584
+ 0.113,
585
+ 0.582,
586
+ 0.49,
587
+ 0.87
588
+ ],
589
+ "angle": 0,
590
+ "content": "We evaluate our proposed approach on the test set of the DocEdit dataset (Mathur et al., 2023). The test is split into two subsets: PDFs, which are more verbose, and Designs, which are graphical. We use the originally proposed metric of calculating accuracy by evaluating if the Intersection Over Union (IoU) between predicted bbox and groundtruth bbox is greater than a threshold. For baselines, we select state-of-the-art MLLMs that understand text well from images. Consequently, we select GPT-4o, Phi3-V, CogAgent, Ferret-UI, Qwen2.5-VL (Bai et al., 2025) and Fuyu (Bavishi et al., 2023). We also consider as baselines specialized models - DocEdit v1 and v2 - that are finetuned on DocEdit. We compare DELOC against these baselines in two settings: zero-shot - where DELOC is not finetuned on the DocEdit train set, and finetuned - where it is."
591
+ },
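The metric described above counts a prediction as correct when its IoU with the ground-truth box exceeds a threshold. A short sketch of A@X:

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def accuracy_at(preds, gts, thresh=0.5):
    """A@thresh: fraction of predictions whose IoU with the ground truth exceeds thresh."""
    return sum(iou(p, g) > thresh for p, g in zip(preds, gts)) / len(gts)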
592
+ {
593
+ "type": "text",
594
+ "bbox": [
595
+ 0.114,
596
+ 0.874,
597
+ 0.49,
598
+ 0.922
599
+ ],
600
+ "angle": 0,
601
+ "content": "Our main results are summarized in Table 2. We make the following observations: 1) DELOC outperforms both open-source and proprietary closed"
602
+ },
603
+ {
604
+ "type": "image",
605
+ "bbox": [
606
+ 0.565,
607
+ 0.088,
608
+ 0.833,
609
+ 0.208
610
+ ],
611
+ "angle": 0,
612
+ "content": null
613
+ },
614
+ {
615
+ "type": "image_caption",
616
+ "bbox": [
617
+ 0.509,
618
+ 0.213,
619
+ 0.882,
620
+ 0.241
621
+ ],
622
+ "angle": 0,
623
+ "content": "Figure 4: Datasize Ablation: DELOC's performance improves with data."
624
+ },
625
+ {
626
+ "type": "image",
627
+ "bbox": [
628
+ 0.523,
629
+ 0.258,
630
+ 0.879,
631
+ 0.459
632
+ ],
633
+ "angle": 0,
634
+ "content": null
635
+ },
636
+ {
637
+ "type": "image_caption",
638
+ "bbox": [
639
+ 0.509,
640
+ 0.465,
641
+ 0.882,
642
+ 0.48
643
+ ],
644
+ "angle": 0,
645
+ "content": "Figure 5: Qualitative sample prediction from DELOC."
646
+ },
647
+ {
648
+ "type": "text",
649
+ "bbox": [
650
+ 0.508,
651
+ 0.499,
652
+ 0.884,
653
+ 0.787
654
+ ],
655
+ "angle": 0,
656
+ "content": "source MLLM by \\(\\sim 3\\mathrm{x}\\) on the PDF subset and \\(\\sim 12\\%\\) on the Design subset. 2) DELOC outperforms all specialized finetuned models on both subsets. 3) DELOC improves over its backbone MLLM, Phi-3-V by \\(28\\mathrm{x}\\) on PDFs and \\(3\\mathrm{x}\\) on Designs, underscoring the significance of our synthetic instruction tuning. 4) The best existing zero-shot MLLM performance is at best \\(5\\%\\), indicating grounding in PDF is challenging. 5) All MLLM perform much better on the Design subset. This is understandable given all MLLM have been pretrained on natural images, which are closer in domain to Designs than to verbose PDFs. Notably, we found Fuyu to be quite bad; hence we do not include it in Table 2. We also don't compare against Agent-DocEdit as it includes an additional post-processing step that assumes availability of ground-truthbbox for all elements in a PDF."
657
+ },
658
+ {
659
+ "type": "text",
660
+ "bbox": [
661
+ 0.508,
662
+ 0.793,
663
+ 0.884,
664
+ 0.92
665
+ ],
666
+ "angle": 0,
667
+ "content": "Our ablation on Code-NLI in Table 3 indicates the importance of data cleaning to our approach. We also manually evaluate the quality of Code-NLI on 25 samples ourselves. Our findings are: 1) Precision is \\(84\\%\\): This implies that most of the data that Code-NLI predicts as correct is actually correct; only \\(16\\%\\) incorrect data passes through the Code-NLI filter. 2) Recall is \\(63.64\\%\\): This"
668
+ },
669
+ {
670
+ "type": "page_number",
671
+ "bbox": [
672
+ 0.476,
673
+ 0.928,
674
+ 0.526,
675
+ 0.941
676
+ ],
677
+ "angle": 0,
678
+ "content": "31130"
679
+ }
680
+ ],
681
+ [
682
+ {
683
+ "type": "text",
684
+ "bbox": [
685
+ 0.113,
686
+ 0.085,
687
+ 0.489,
688
+ 0.181
689
+ ],
690
+ "angle": 0,
691
+ "content": "implies it rejects a decent amount of good data \\((36\\%)\\). This loss is acceptable for our use case since we generate a very large amount of synthetic data. The important consideration for us is that incorrect data should not be flagged as correct – which Code-NLI does (84% precision)."
692
+ },
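+ 
+ For clarity, the precision and recall quoted above are the standard definitions applied to the filter's keep/reject decisions; a small illustrative sketch (function and variable names are ours):
+ 
+ ```python
+ # Precision/recall of a binary keep/reject filter such as Code-NLI.
+ # kept[i]: the filter accepted sample i; good[i]: sample i is truly correct.
+ def filter_precision_recall(kept: list, good: list) -> tuple:
+     tp = sum(k and g for k, g in zip(kept, good))      # correct data kept
+     fp = sum(k and not g for k, g in zip(kept, good))  # incorrect data let through
+     fn = sum(g and not k for k, g in zip(kept, good))  # good data wrongly rejected
+     precision = tp / (tp + fp) if tp + fp else 0.0
+     recall = tp / (tp + fn) if tp + fn else 0.0
+     return precision, recall
+ ```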
693
+ {
694
+ "type": "text",
695
+ "bbox": [
696
+ 0.113,
697
+ 0.182,
698
+ 0.49,
699
+ 0.295
700
+ ],
701
+ "angle": 0,
702
+ "content": "Figure 4 demonstrates that performance of DELOC improves with data size. We also provide a qualitative example in Figure 5 that shows 1) DELOC acquires effective capability of verbose PDF grounding. 2) Ambiguity in the expected response can lead to divergence from ground-truth. More examples in Appendix G."
703
+ },
704
+ {
705
+ "type": "title",
706
+ "bbox": [
707
+ 0.114,
708
+ 0.305,
709
+ 0.248,
710
+ 0.32
711
+ ],
712
+ "angle": 0,
713
+ "content": "5 Conclusion"
714
+ },
715
+ {
716
+ "type": "text",
717
+ "bbox": [
718
+ 0.113,
719
+ 0.329,
720
+ 0.49,
721
+ 0.457
722
+ ],
723
+ "angle": 0,
724
+ "content": "In this work, we propose DELOC, a system to ground PDF edit request spatially. DELOC leverages spatial and hierarchical structure of PDFs to generate 3M synthetic instruction tuning data that successfully adapts existing MLLM for PDF grounding. Our results demonstrate that DELOC outperforms both existing MLLMs, including GPT-40, and specialized models on DocEdit."
725
+ },
726
+ {
727
+ "type": "title",
728
+ "bbox": [
729
+ 0.114,
730
+ 0.469,
731
+ 0.221,
732
+ 0.483
733
+ ],
734
+ "angle": 0,
735
+ "content": "Limitations"
736
+ },
737
+ {
738
+ "type": "text",
739
+ "bbox": [
740
+ 0.113,
741
+ 0.493,
742
+ 0.49,
743
+ 0.798
744
+ ],
745
+ "angle": 0,
746
+ "content": "We attempted to cover a comprehensive list of PDF elements in our synthetically generated data. This coverage is mostly dependent on the element annotation present in PDF analysis datasets we began with to create our hierarchical representation. As such, there could be some PDF elements which are not covered. We leave for future work the addition of more PDF analysis datasets to increase element coverage. Furthermore, this method (like most grounding MLLM approaches) generates bounding boxes using auto-regressive prediction. The loss used assigns equal weight to all predictions not exactly same as ground-truthbbox, irrespective of their proximity to the ground-truthbbox. Ideally, the loss should be higher for a predictedbbox that is farther from the ground-truth than a predicted box that is closer to ground-truth. Reinforcement learning based preference optimization could be a way to address this issue in future works."
747
+ },
748
+ {
749
+ "type": "title",
750
+ "bbox": [
751
+ 0.116,
752
+ 0.825,
753
+ 0.214,
754
+ 0.839
755
+ ],
756
+ "angle": 0,
757
+ "content": "References"
758
+ },
759
+ {
760
+ "type": "ref_text",
761
+ "bbox": [
762
+ 0.116,
763
+ 0.846,
764
+ 0.489,
765
+ 0.887
766
+ ],
767
+ "angle": 0,
768
+ "content": "Marah Abdin and The Phi-3 Team. 2024. Phi-3 technical report: A highly capable language model locally on your phone. Preprint, arXiv:2404.14219."
769
+ },
770
+ {
771
+ "type": "ref_text",
772
+ "bbox": [
773
+ 0.117,
774
+ 0.894,
775
+ 0.49,
776
+ 0.922
777
+ ],
778
+ "angle": 0,
779
+ "content": "Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang,"
780
+ },
781
+ {
782
+ "type": "list",
783
+ "bbox": [
784
+ 0.116,
785
+ 0.846,
786
+ 0.49,
787
+ 0.922
788
+ ],
789
+ "angle": 0,
790
+ "content": null
791
+ },
792
+ {
793
+ "type": "ref_text",
794
+ "bbox": [
795
+ 0.529,
796
+ 0.086,
797
+ 0.883,
798
+ 0.177
799
+ ],
800
+ "angle": 0,
801
+ "content": "Shijie Wang, Jun Tang, Humen Zhong, Yanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. 2025. Qwen2.5-vl technical report. Preprint, arXiv:2502.13923."
802
+ },
803
+ {
804
+ "type": "ref_text",
805
+ "bbox": [
806
+ 0.512,
807
+ 0.188,
808
+ 0.883,
809
+ 0.24
810
+ ],
811
+ "angle": 0,
812
+ "content": "Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Săgnak Taşürlar. 2023. Introducing our multimodal models."
813
+ },
814
+ {
815
+ "type": "ref_text",
816
+ "bbox": [
817
+ 0.512,
818
+ 0.25,
819
+ 0.885,
820
+ 0.342
821
+ ],
822
+ "angle": 0,
823
+ "content": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics."
824
+ },
825
+ {
826
+ "type": "ref_text",
827
+ "bbox": [
828
+ 0.512,
829
+ 0.352,
830
+ 0.883,
831
+ 0.405
832
+ ],
833
+ "angle": 0,
834
+ "content": "Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. 2023. Shikra: Unleashing multimodal llm's referential dialogue magic. arXiv preprint arXiv:2306.15195."
835
+ },
836
+ {
837
+ "type": "ref_text",
838
+ "bbox": [
839
+ 0.512,
840
+ 0.414,
841
+ 0.883,
842
+ 0.544
843
+ ],
844
+ "angle": 0,
845
+ "content": "Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Songyang Zhang, Haodong Duan, Wenwei Zhang, Yining Li, Hang Yan, Yang Gao, Zhe Chen, Xinyue Zhang, Wei Li, Jingwen Li, Wenhai Wang, Kai Chen, Conghui He, Xingcheng Zhang, Jifeng Dai, Yu Qiao, Dahua Lin, and Jiaqi Wang. 2024. Internlm-xcomposer2-4khd: A pioneering large vision-language model handling resolutions from 336 pixels to 4k hd. Preprint, arXiv:2404.06512."
846
+ },
847
+ {
848
+ "type": "ref_text",
849
+ "bbox": [
850
+ 0.512,
851
+ 0.555,
852
+ 0.883,
853
+ 0.582
854
+ ],
855
+ "angle": 0,
856
+ "content": "Aaron Grattafori and The LLaMA Team. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783."
857
+ },
858
+ {
859
+ "type": "ref_text",
860
+ "bbox": [
861
+ 0.512,
862
+ 0.592,
863
+ 0.883,
864
+ 0.657
865
+ ],
866
+ "angle": 0,
867
+ "content": "Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhui Ji, Yan Wang, Zihan Wang, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. Co-gagent: A visual language model for gui agents. Preprint, arXiv:2312.08914."
868
+ },
869
+ {
870
+ "type": "ref_text",
871
+ "bbox": [
872
+ 0.512,
873
+ 0.667,
874
+ 0.883,
875
+ 0.72
876
+ ],
877
+ "angle": 0,
878
+ "content": "Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. 2024. Lisa: Reasoning segmentation via large language model. Preprint, arXiv:2308.00692."
879
+ },
880
+ {
881
+ "type": "ref_text",
882
+ "bbox": [
883
+ 0.512,
884
+ 0.73,
885
+ 0.883,
886
+ 0.796
887
+ ],
888
+ "angle": 0,
889
+ "content": "Lianian Harold Li*, Pengchuan Zhang*, Haotian Zhang*, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. 2022. Grounded language-image pre-training. In CVPR."
890
+ },
891
+ {
892
+ "type": "ref_text",
893
+ "bbox": [
894
+ 0.512,
895
+ 0.806,
896
+ 0.883,
897
+ 0.87
898
+ ],
899
+ "angle": 0,
900
+ "content": "Junpeng Liu, Tianyue Ou, Yifan Song, Yuxiao Qu, Wai Lam, Chenyan Xiong, Wenhu Chen, Graham Neubig, and Xiang Yue. 2024. Harnessing webpage uis for text-rich visual understanding. Preprint, arXiv:2410.13824."
901
+ },
902
+ {
903
+ "type": "ref_text",
904
+ "bbox": [
905
+ 0.512,
906
+ 0.881,
907
+ 0.883,
908
+ 0.922
909
+ ],
910
+ "angle": 0,
911
+ "content": "Tengchao Lv, Yupan Huang, Jingye Chen, Lei Cui, Shuming Ma, Yaoyao Chang, Shaohan Huang, Wenhui Wang, Li Dong, Weiyao Luo, et al. 2023."
912
+ },
913
+ {
914
+ "type": "list",
915
+ "bbox": [
916
+ 0.512,
917
+ 0.086,
918
+ 0.885,
919
+ 0.922
920
+ ],
921
+ "angle": 0,
922
+ "content": null
923
+ },
924
+ {
925
+ "type": "page_number",
926
+ "bbox": [
927
+ 0.476,
928
+ 0.928,
929
+ 0.524,
930
+ 0.941
931
+ ],
932
+ "angle": 0,
933
+ "content": "31131"
934
+ }
935
+ ],
936
+ [
937
+ {
938
+ "type": "ref_text",
939
+ "bbox": [
940
+ 0.135,
941
+ 0.086,
942
+ 0.487,
943
+ 0.113
944
+ ],
945
+ "angle": 0,
946
+ "content": "Kosmos-2.5: A multimodal literate model. arXiv preprint arXiv:2309.11419."
947
+ },
948
+ {
949
+ "type": "ref_text",
950
+ "bbox": [
951
+ 0.117,
952
+ 0.127,
953
+ 0.487,
954
+ 0.179
955
+ ],
956
+ "angle": 0,
957
+ "content": "Chuofan Ma, Yi Jiang, Jiannan Wu, Zehuan Yuan, and Xiaojuan Qi. 2024. Groma: Localized visual tokenization for grounding multimodal large language models. arXiv preprint arXiv:2404.13013."
958
+ },
959
+ {
960
+ "type": "ref_text",
961
+ "bbox": [
962
+ 0.117,
963
+ 0.193,
964
+ 0.487,
965
+ 0.259
966
+ ],
967
+ "angle": 0,
968
+ "content": "Puneet Mathur, Rajiv Jain, Jiumiang Gu, Franck Dernoncourt, Dinesh Manocha, and Vlad Morariu. 2023. Docedit: Language-guided document editing. In Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI)."
969
+ },
970
+ {
971
+ "type": "ref_text",
972
+ "bbox": [
973
+ 0.117,
974
+ 0.273,
975
+ 0.487,
976
+ 0.299
977
+ ],
978
+ "angle": 0,
979
+ "content": "OpenAI. 2024. Gpt-4o system card. Preprint, arXiv:2410.21276."
980
+ },
981
+ {
982
+ "type": "ref_text",
983
+ "bbox": [
984
+ 0.117,
985
+ 0.313,
986
+ 0.487,
987
+ 0.392
988
+ ],
989
+ "angle": 0,
990
+ "content": "Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar. 2022. Doclaynet: A large human-annotated dataset for documentlayout segmentation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, page 3743-3751. ACM."
991
+ },
992
+ {
993
+ "type": "ref_text",
994
+ "bbox": [
995
+ 0.117,
996
+ 0.405,
997
+ 0.487,
998
+ 0.484
999
+ ],
1000
+ "angle": 0,
1001
+ "content": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. Preprint, arXiv:2103.00020."
1002
+ },
1003
+ {
1004
+ "type": "ref_text",
1005
+ "bbox": [
1006
+ 0.117,
1007
+ 0.497,
1008
+ 0.487,
1009
+ 0.604
1010
+ ],
1011
+ "angle": 0,
1012
+ "content": "Manan Suri, Puneet Mathur, Franck Dernoncourt, Rajiv Jain, Vlad I Morariu, Ramit Sawhney, Preslav Nakov, and Dinesh Manocha. 2024. DocEdit-v2: Document structure editing via multimodal LLM grounding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15485-15505, Miami, Florida, USA. Association for Computational Linguistics."
1013
+ },
1014
+ {
1015
+ "type": "ref_text",
1016
+ "bbox": [
1017
+ 0.117,
1018
+ 0.616,
1019
+ 0.487,
1020
+ 0.695
1021
+ ],
1022
+ "angle": 0,
1023
+ "content": "Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. Cogvlm: Visual expert for pretrained language models. Preprint, arXiv:2311.03079."
1024
+ },
1025
+ {
1026
+ "type": "ref_text",
1027
+ "bbox": [
1028
+ 0.117,
1029
+ 0.709,
1030
+ 0.487,
1031
+ 0.762
1032
+ ],
1033
+ "angle": 0,
1034
+ "content": "Te-Lin Wu, Rajiv Jain, Yufan Zhou, Puneet Mathur, and Vlad I Morariu. 2024. Agent-docedit: Language-instructed LLM agent for content-rich document editing. In First Conference on Language Modeling."
1035
+ },
1036
+ {
1037
+ "type": "ref_text",
1038
+ "bbox": [
1039
+ 0.117,
1040
+ 0.775,
1041
+ 0.487,
1042
+ 0.841
1043
+ ],
1044
+ "angle": 0,
1045
+ "content": "Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, and Yinfei Yang. 2023. Ferret: Refer and ground anything anywhere at any granularity. Preprint, arXiv:2310.07704."
1046
+ },
1047
+ {
1048
+ "type": "ref_text",
1049
+ "bbox": [
1050
+ 0.117,
1051
+ 0.855,
1052
+ 0.487,
1053
+ 0.919
1054
+ ],
1055
+ "angle": 0,
1056
+ "content": "Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, and Zhe Gan. 2024. Ferret-ui: Grounded mobile ui understanding with multimodal llms. Preprint, arXiv:2404.05719."
1057
+ },
1058
+ {
1059
+ "type": "list",
1060
+ "bbox": [
1061
+ 0.117,
1062
+ 0.086,
1063
+ 0.487,
1064
+ 0.919
1065
+ ],
1066
+ "angle": 0,
1067
+ "content": null
1068
+ },
1069
+ {
1070
+ "type": "ref_text",
1071
+ "bbox": [
1072
+ 0.513,
1073
+ 0.086,
1074
+ 0.882,
1075
+ 0.139
1076
+ ],
1077
+ "angle": 0,
1078
+ "content": "Yuqian Yuan, Wentong Li, Jian Liu, Dongqi Tang, Xinjie Luo, Chi Qin, Lei Zhang, and Jianke Zhu. 2023. Osprey: Pixel understanding with visual instruction tuning. Preprint, arXiv:2312.10032."
1079
+ },
1080
+ {
1081
+ "type": "ref_text",
1082
+ "bbox": [
1083
+ 0.513,
1084
+ 0.149,
1085
+ 0.882,
1086
+ 0.189
1087
+ ],
1088
+ "angle": 0,
1089
+ "content": "Ao Zhang, Yuan Yao, Wei Ji, Zhiyuan Liu, and Tat-Seng Chua. 2023. Next-chat: An Imm for chat, detection and segmentation. Preprint, arXiv:2311.04498."
1090
+ },
1091
+ {
1092
+ "type": "ref_text",
1093
+ "bbox": [
1094
+ 0.513,
1095
+ 0.199,
1096
+ 0.882,
1097
+ 0.239
1098
+ ],
1099
+ "angle": 0,
1100
+ "content": "Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. Preprint, arXiv:1908.07836."
1101
+ },
1102
+ {
1103
+ "type": "list",
1104
+ "bbox": [
1105
+ 0.513,
1106
+ 0.086,
1107
+ 0.882,
1108
+ 0.239
1109
+ ],
1110
+ "angle": 0,
1111
+ "content": null
1112
+ },
1113
+ {
1114
+ "type": "page_number",
1115
+ "bbox": [
1116
+ 0.477,
1117
+ 0.929,
1118
+ 0.525,
1119
+ 0.941
1120
+ ],
1121
+ "angle": 0,
1122
+ "content": "31132"
1123
+ }
1124
+ ],
1125
+ [
1126
+ {
1127
+ "type": "text",
1128
+ "bbox": [
1129
+ 0.114,
1130
+ 0.085,
1131
+ 0.489,
1132
+ 0.116
1133
+ ],
1134
+ "angle": 0,
1135
+ "content": "We provide additional details here for further clarification."
1136
+ },
1137
+ {
1138
+ "type": "text",
1139
+ "bbox": [
1140
+ 0.136,
1141
+ 0.129,
1142
+ 0.473,
1143
+ 0.145
1144
+ ],
1145
+ "angle": 0,
1146
+ "content": "- Element Creation Heuristics (Appendix A)"
1147
+ },
1148
+ {
1149
+ "type": "text",
1150
+ "bbox": [
1151
+ 0.136,
1152
+ 0.155,
1153
+ 0.489,
1154
+ 0.187
1155
+ ],
1156
+ "angle": 0,
1157
+ "content": "- Synthetic Data Generation Prompt (Appendix B)"
1158
+ },
1159
+ {
1160
+ "type": "text",
1161
+ "bbox": [
1162
+ 0.136,
1163
+ 0.197,
1164
+ 0.489,
1165
+ 0.214
1166
+ ],
1167
+ "angle": 0,
1168
+ "content": "Code-NLI Prompt and Samples (Appendix C)"
1169
+ },
1170
+ {
1171
+ "type": "text",
1172
+ "bbox": [
1173
+ 0.136,
1174
+ 0.224,
1175
+ 0.37,
1176
+ 0.24
1177
+ ],
1178
+ "angle": 0,
1179
+ "content": "Data Statistics (Appendix D)"
1180
+ },
1181
+ {
1182
+ "type": "text",
1183
+ "bbox": [
1184
+ 0.136,
1185
+ 0.25,
1186
+ 0.406,
1187
+ 0.266
1188
+ ],
1189
+ "angle": 0,
1190
+ "content": "- Model Architecture (Appendix E)"
1191
+ },
1192
+ {
1193
+ "type": "text",
1194
+ "bbox": [
1195
+ 0.136,
1196
+ 0.277,
1197
+ 0.463,
1198
+ 0.293
1199
+ ],
1200
+ "angle": 0,
1201
+ "content": "Additional Training Details (Appendix F)"
1202
+ },
1203
+ {
1204
+ "type": "text",
1205
+ "bbox": [
1206
+ 0.136,
1207
+ 0.303,
1208
+ 0.49,
1209
+ 0.336
1210
+ ],
1211
+ "angle": 0,
1212
+ "content": "Additional Qualitative Examples (Appendix G)"
1213
+ },
1214
+ {
1215
+ "type": "list",
1216
+ "bbox": [
1217
+ 0.136,
1218
+ 0.129,
1219
+ 0.49,
1220
+ 0.336
1221
+ ],
1222
+ "angle": 0,
1223
+ "content": null
1224
+ },
1225
+ {
1226
+ "type": "image",
1227
+ "bbox": [
1228
+ 0.123,
1229
+ 0.361,
1230
+ 0.481,
1231
+ 0.706
1232
+ ],
1233
+ "angle": 0,
1234
+ "content": null
1235
+ },
1236
+ {
1237
+ "type": "image_caption",
1238
+ "bbox": [
1239
+ 0.114,
1240
+ 0.712,
1241
+ 0.487,
1242
+ 0.74
1243
+ ],
1244
+ "angle": 0,
1245
+ "content": "Figure 6: Sample output from using our heuristic to create lines and list-items."
1246
+ },
1247
+ {
1248
+ "type": "title",
1249
+ "bbox": [
1250
+ 0.115,
1251
+ 0.768,
1252
+ 0.4,
1253
+ 0.784
1254
+ ],
1255
+ "angle": 0,
1256
+ "content": "A Element Creation Heuristics"
1257
+ },
1258
+ {
1259
+ "type": "text",
1260
+ "bbox": [
1261
+ 0.113,
1262
+ 0.793,
1263
+ 0.49,
1264
+ 0.922
1265
+ ],
1266
+ "angle": 0,
1267
+ "content": "We require annotations for elements in PDFs to create the spatially-aware hierarchical representation. While an element annotation may be present in a dataset, its subelement or parent element may not have the requisite annotation available. We use heuristics to create annotations for such elements. To create lines within a paragraph, we take the bounding box of every word in the paragraph. All"
1268
+ },
1269
+ {
1270
+ "type": "text",
1271
+ "bbox": [
1272
+ 0.508,
1273
+ 0.085,
1274
+ 0.883,
1275
+ 0.214
1276
+ ],
1277
+ "angle": 0,
1278
+ "content": "the words with same y-coordinates are considered to be in the same line. Similarly, to create lists from lines, we compare the x-coordinates of the beginning of the lines. Outliers are considered to be the beginning of a new list item. We illustrate the output from such heuristics to create lines and list-items in Figure 6. While these are simple, they work quite well."
1279
+ },
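+ 
+ A minimal sketch of these two heuristics (our own illustrative implementation; the grouping tolerances are assumptions, since exact values are not specified):
+ 
+ ```python
+ from collections import Counter
+ 
+ def words_to_lines(words: list, y_tol: float = 2.0) -> list:
+     # Words whose top y-coordinates agree within a tolerance form one line.
+     lines, current = [], []
+     for w in sorted(words, key=lambda w: (w["bbox"][1], w["bbox"][0])):
+         if current and abs(w["bbox"][1] - current[-1]["bbox"][1]) > y_tol:
+             lines.append(current)
+             current = []
+         current.append(w)
+     return lines + ([current] if current else [])
+ 
+ def lines_to_list_items(lines: list, x_tol: float = 5.0) -> list:
+     # A line whose starting x-coordinate is an outlier w.r.t. the most common
+     # indent is treated as the beginning of a new list item.
+     xs = [line[0]["bbox"][0] for line in lines]
+     common_x = Counter(round(x) for x in xs).most_common(1)[0][0]
+     items, current = [], []
+     for line, x in zip(lines, xs):
+         if current and abs(x - common_x) > x_tol:
+             items.append(current)
+             current = []
+         current.append(line)
+     return items + ([current] if current else [])
+ ```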
1280
+ {
1281
+ "type": "title",
1282
+ "bbox": [
1283
+ 0.509,
1284
+ 0.226,
1285
+ 0.85,
1286
+ 0.243
1287
+ ],
1288
+ "angle": 0,
1289
+ "content": "B Synthetic Data Generation Prompt"
1290
+ },
1291
+ {
1292
+ "type": "text",
1293
+ "bbox": [
1294
+ 0.508,
1295
+ 0.251,
1296
+ 0.885,
1297
+ 0.395
1298
+ ],
1299
+ "angle": 0,
1300
+ "content": "Once we have a hierarchical representation of the PDF elements along with their corresponding bounding box annotations, we feed them into an LLM to generate the instruction tuning data. Apart from the PDF elements, we input a system prompt containing the guidelines for generating an instruction tuning set and a one-shot example of how the user system conversation should be formatted. The process is illustrated in Figure 7."
1301
+ },
1302
+ {
1303
+ "type": "text",
1304
+ "bbox": [
1305
+ 0.508,
1306
+ 0.397,
1307
+ 0.885,
1308
+ 0.539
1309
+ ],
1310
+ "angle": 0,
1311
+ "content": "The system prompt contains guidelines describing the task, input, and expected output. It contains instructions to generate diverse and compositional edit requests, to only utilize the information in the given hierarchical PDF representation, to not hallucinate, and so on. In addition, we provide a one-shot example to the LLM to further illustrate our requirements and guide it to follow expected format."
1312
+ },
1313
+ {
1314
+ "type": "text",
1315
+ "bbox": [
1316
+ 0.529,
1317
+ 0.542,
1318
+ 0.812,
1319
+ 0.557
1320
+ ],
1321
+ "angle": 0,
1322
+ "content": "We provide the system prompt below:"
1323
+ },
1324
+ {
1325
+ "type": "text",
1326
+ "bbox": [
1327
+ 0.511,
1328
+ 0.568,
1329
+ 0.882,
1330
+ 0.761
1331
+ ],
1332
+ "angle": 0,
1333
+ "content": "You are an AI visual assistance that can analyze PDFs. You will receive information describing a pdf, where each pdf element (text, paragraph, header, footer, chart etc.) detection is represented in a json format. Each element is denoted by its type, followed by its bounding box coordinates, followed by its value and/or its subelements. Bounding box coordinates are represented as \\((x1, y1, x2, y2)\\). These values correspond to the top left \\(x\\), top left \\(y\\), bottom right \\(x\\), and bottom right \\(y\\). The subelements are formatted and indented with spaces. The content of a pdf element is found at the 'value' key. The pdf elements are organized in the order in which they are present in the pdf: top to bottom, left to right. The pdfs can be single column, double column or multiple columns. Judge the number of columns in the pdfs by looking at the relative positioning of the pdf elements' \\(x1\\)."
1334
+ },
1335
+ {
1336
+ "type": "text",
1337
+ "bbox": [
1338
+ 0.512,
1339
+ 0.768,
1340
+ 0.879,
1341
+ 0.837
1342
+ ],
1343
+ "angle": 0,
1344
+ "content": "Using the provided texts and coordinates, design question and answer pairs simulating the interactions between a user and system. Conversations should focus on potential user edit requests on the pdf elements (vs perception). Please follow the guidelines below while generating the conversations:"
1345
+ },
1346
+ {
1347
+ "type": "text",
1348
+ "bbox": [
1349
+ 0.512,
1350
+ 0.843,
1351
+ 0.878,
1352
+ 0.883
1353
+ ],
1354
+ "angle": 0,
1355
+ "content": "1. The edit requests should be diverse, requiring different kind of editing like adding, deleting, changing, modifying, swapping, moving, replacing, merging, splitting, and so on."
1356
+ },
1357
+ {
1358
+ "type": "text",
1359
+ "bbox": [
1360
+ 0.512,
1361
+ 0.891,
1362
+ 0.844,
1363
+ 0.92
1364
+ ],
1365
+ "angle": 0,
1366
+ "content": "2. Please refer to the pdf-element that needs editing in diverse compositional ways. For example, \\(\\backslash\\) delete last line of the third"
1367
+ },
1368
+ {
1369
+ "type": "list",
1370
+ "bbox": [
1371
+ 0.512,
1372
+ 0.843,
1373
+ 0.878,
1374
+ 0.92
1375
+ ],
1376
+ "angle": 0,
1377
+ "content": null
1378
+ },
1379
+ {
1380
+ "type": "page_number",
1381
+ "bbox": [
1382
+ 0.476,
1383
+ 0.928,
1384
+ 0.526,
1385
+ 0.941
1386
+ ],
1387
+ "angle": 0,
1388
+ "content": "31133"
1389
+ }
1390
+ ],
1391
+ [
1392
+ {
1393
+ "type": "title",
1394
+ "bbox": [
1395
+ 0.145,
1396
+ 0.101,
1397
+ 0.212,
1398
+ 0.109
1399
+ ],
1400
+ "angle": 0,
1401
+ "content": "System Prompt"
1402
+ },
1403
+ {
1404
+ "type": "text",
1405
+ "bbox": [
1406
+ 0.145,
1407
+ 0.11,
1408
+ 0.384,
1409
+ 0.237
1410
+ ],
1411
+ "angle": 0,
1412
+ "content": "You are an AI visual assistance that can analyze PDFs. You will receive information describing a pdf, where each pdf element (text, paragraph, header, footer, chart etc.) detection is represented in a json format. Each element is denoted by its type, followed by its bounding box coordinates, followed by its value and/or its subelements. Bounding box coordinates are represented as (x1, y1, x2, y2). These values correspond to the top left \\( x \\), top left \\( y \\), bottom right \\( x \\), and bottom right \\( y \\). The subelements are formatted and indented with spaces. The content of a pdf element is found at the 'value' key. The pdf elements are organized in the order in which they are present in the pdf: top to bottom, left to right. The pdfs can be single column, double column or multiple columns. Judge the number of columns in the pdfs by looking at the relative positioning of the pdf elements' \\( x_1 \\). Using the provided texts and coordinates, design question and answer pairs simulating the interactions between a user and system. Conversations should focus on potential user edit requests on the pdf elements (vs perception)."
1413
+ },
1414
+ {
1415
+ "type": "text",
1416
+ "bbox": [
1417
+ 0.146,
1418
+ 0.242,
1419
+ 0.353,
1420
+ 0.256
1421
+ ],
1422
+ "angle": 0,
1423
+ "content": "Please follow the guidelines below while generating the conversations:"
1424
+ },
1425
+ {
1426
+ "type": "text",
1427
+ "bbox": [
1428
+ 0.145,
1429
+ 0.257,
1430
+ 0.383,
1431
+ 0.307
1432
+ ],
1433
+ "angle": 0,
1434
+ "content": "1. The edit requests should be diverse, requiring different kind of editing like adding, deleting, changing, modifying, swapping, moving, replacing, merging, splitting, and so on. \n2. Please refer to the pdf-element that needs editing in diverse compositional ways. For example, \"delete last line of the third paragraph\" or \"move the table in between the two figures from top to bottom\"."
1435
+ },
1436
+ {
1437
+ "type": "title",
1438
+ "bbox": [
1439
+ 0.394,
1440
+ 0.088,
1441
+ 0.469,
1442
+ 0.096
1443
+ ],
1444
+ "angle": 0,
1445
+ "content": "One Shot Example"
1446
+ },
1447
+ {
1448
+ "type": "text",
1449
+ "bbox": [
1450
+ 0.398,
1451
+ 0.097,
1452
+ 0.421,
1453
+ 0.103
1454
+ ],
1455
+ "angle": 0,
1456
+ "content": "<user>"
1457
+ },
1458
+ {
1459
+ "type": "text",
1460
+ "bbox": [
1461
+ 0.398,
1462
+ 0.104,
1463
+ 0.448,
1464
+ 0.109
1465
+ ],
1466
+ "angle": 0,
1467
+ "content": "element':text"
1468
+ },
1469
+ {
1470
+ "type": "text",
1471
+ "bbox": [
1472
+ 0.398,
1473
+ 0.11,
1474
+ 0.484,
1475
+ 0.116
1476
+ ],
1477
+ "angle": 0,
1478
+ "content": "'bbox': [84, 382, 484, 413],"
1479
+ },
1480
+ {
1481
+ "type": "text",
1482
+ "bbox": [
1483
+ 0.398,
1484
+ 0.117,
1485
+ 0.519,
1486
+ 0.123
1487
+ ],
1488
+ "angle": 0,
1489
+ "content": "subelements: [element: line,"
1490
+ },
1491
+ {
1492
+ "type": "text",
1493
+ "bbox": [
1494
+ 0.432,
1495
+ 0.124,
1496
+ 0.617,
1497
+ 0.13
1498
+ ],
1499
+ "angle": 0,
1500
+ "content": "'value' :interactions were then functionally annotated for"
1501
+ },
1502
+ {
1503
+ "type": "text",
1504
+ "bbox": [
1505
+ 0.432,
1506
+ 0.131,
1507
+ 0.519,
1508
+ 0.136
1509
+ ],
1510
+ "angle": 0,
1511
+ "content": "value : Interactions were the"
1512
+ },
1513
+ {
1514
+ "type": "list",
1515
+ "bbox": [
1516
+ 0.432,
1517
+ 0.124,
1518
+ 0.617,
1519
+ 0.136
1520
+ ],
1521
+ "angle": 0,
1522
+ "content": null
1523
+ },
1524
+ {
1525
+ "type": "text",
1526
+ "bbox": [
1527
+ 0.43,
1528
+ 0.138,
1529
+ 0.482,
1530
+ 0.144
1531
+ ],
1532
+ "angle": 0,
1533
+ "content": "element: line"
1534
+ },
1535
+ {
1536
+ "type": "text",
1537
+ "bbox": [
1538
+ 0.43,
1539
+ 0.144,
1540
+ 0.519,
1541
+ 0.15
1542
+ ],
1543
+ "angle": 0,
1544
+ "content": "\"Bbox:[84,397,377,414]"
1545
+ },
1546
+ {
1547
+ "type": "text",
1548
+ "bbox": [
1549
+ 0.43,
1550
+ 0.151,
1551
+ 0.613,
1552
+ 0.157
1553
+ ],
1554
+ "angle": 0,
1555
+ "content": "'value': 'functional processes and pathways involved.']."
1556
+ },
1557
+ {
1558
+ "type": "text",
1559
+ "bbox": [
1560
+ 0.398,
1561
+ 0.158,
1562
+ 0.483,
1563
+ 0.164
1564
+ ],
1565
+ "angle": 0,
1566
+ "content": "element':text'"
1567
+ },
1568
+ {
1569
+ "type": "text",
1570
+ "bbox": [
1571
+ 0.398,
1572
+ 0.164,
1573
+ 0.484,
1574
+ 0.17
1575
+ ],
1576
+ "angle": 0,
1577
+ "content": "'bbox': [84, 410, 484, 498],"
1578
+ },
1579
+ {
1580
+ "type": "text",
1581
+ "bbox": [
1582
+ 0.43,
1583
+ 0.171,
1584
+ 0.5,
1585
+ 0.176
1586
+ ],
1587
+ "angle": 0,
1588
+ "content": "ents': ['element': 'line',"
1589
+ },
1590
+ {
1591
+ "type": "text",
1592
+ "bbox": [
1593
+ 0.43,
1594
+ 0.177,
1595
+ 0.527,
1596
+ 0.182
1597
+ ],
1598
+ "angle": 0,
1599
+ "content": "'bbox': [109, 411, 484, 428],..."
1600
+ },
1601
+ {
1602
+ "type": "title",
1603
+ "bbox": [
1604
+ 0.398,
1605
+ 0.183,
1606
+ 0.429,
1607
+ 0.188
1608
+ ],
1609
+ "angle": 0,
1610
+ "content": "system>"
1611
+ },
1612
+ {
1613
+ "type": "text",
1614
+ "bbox": [
1615
+ 0.394,
1616
+ 0.189,
1617
+ 0.621,
1618
+ 0.201
1619
+ ],
1620
+ "angle": 0,
1621
+ "content": "User: Locate where changes are required to perform this query, 'Move the top figure from the center of the page to left'."
1622
+ },
1623
+ {
1624
+ "type": "text",
1625
+ "bbox": [
1626
+ 0.394,
1627
+ 0.202,
1628
+ 0.621,
1629
+ 0.213
1630
+ ],
1631
+ "angle": 0,
1632
+ "content": "System: The figure that is required to move is located at [241, 88, 756, 323]."
1633
+ },
1634
+ {
1635
+ "type": "list",
1636
+ "bbox": [
1637
+ 0.394,
1638
+ 0.189,
1639
+ 0.621,
1640
+ 0.213
1641
+ ],
1642
+ "angle": 0,
1643
+ "content": null
1644
+ },
1645
+ {
1646
+ "type": "text",
1647
+ "bbox": [
1648
+ 0.394,
1649
+ 0.219,
1650
+ 0.604,
1651
+ 0.23
1652
+ ],
1653
+ "angle": 0,
1654
+ "content": "User: Find the specific location requiring changes to perform this request. \"Underline the title.\" Results and Discussion"
1655
+ },
1656
+ {
1657
+ "type": "text",
1658
+ "bbox": [
1659
+ 0.394,
1660
+ 0.231,
1661
+ 0.619,
1662
+ 0.243
1663
+ ],
1664
+ "angle": 0,
1665
+ "content": "System: The title \"Results and Discussion\" can be found at [513, 675, 721, 691]."
1666
+ },
1667
+ {
1668
+ "type": "list",
1669
+ "bbox": [
1670
+ 0.394,
1671
+ 0.219,
1672
+ 0.619,
1673
+ 0.243
1674
+ ],
1675
+ "angle": 0,
1676
+ "content": null
1677
+ },
1678
+ {
1679
+ "type": "title",
1680
+ "bbox": [
1681
+ 0.394,
1682
+ 0.247,
1683
+ 0.439,
1684
+ 0.254
1685
+ ],
1686
+ "angle": 0,
1687
+ "content": "Detections"
1688
+ },
1689
+ {
1690
+ "type": "text",
1691
+ "bbox": [
1692
+ 0.398,
1693
+ 0.255,
1694
+ 0.421,
1695
+ 0.261
1696
+ ],
1697
+ "angle": 0,
1698
+ "content": "'eleme"
1699
+ },
1700
+ {
1701
+ "type": "text",
1702
+ "bbox": [
1703
+ 0.398,
1704
+ 0.262,
1705
+ 0.474,
1706
+ 0.267
1707
+ ],
1708
+ "angle": 0,
1709
+ "content": "'bbox': [76, 76, 179, 95],"
1710
+ },
1711
+ {
1712
+ "type": "text",
1713
+ "bbox": [
1714
+ 0.398,
1715
+ 0.267,
1716
+ 0.421,
1717
+ 0.272
1718
+ ],
1719
+ "angle": 0,
1720
+ "content": "'subele"
1721
+ },
1722
+ {
1723
+ "type": "text",
1724
+ "bbox": [
1725
+ 0.432,
1726
+ 0.274,
1727
+ 0.482,
1728
+ 0.279
1729
+ ],
1730
+ "angle": 0,
1731
+ "content": "'element': 'line'"
1732
+ },
1733
+ {
1734
+ "type": "text",
1735
+ "bbox": [
1736
+ 0.432,
1737
+ 0.28,
1738
+ 0.513,
1739
+ 0.286
1740
+ ],
1741
+ "angle": 0,
1742
+ "content": "'bbox': [76, 76, 179, 94],"
1743
+ },
1744
+ {
1745
+ "type": "text",
1746
+ "bbox": [
1747
+ 0.432,
1748
+ 0.286,
1749
+ 0.505,
1750
+ 0.291
1751
+ ],
1752
+ "angle": 0,
1753
+ "content": "value':AppendDIX}\\}},"
1754
+ },
1755
+ {
1756
+ "type": "text",
1757
+ "bbox": [
1758
+ 0.398,
1759
+ 0.292,
1760
+ 0.415,
1761
+ 0.297
1762
+ ],
1763
+ "angle": 0,
1764
+ "content": "'elem"
1765
+ },
1766
+ {
1767
+ "type": "text",
1768
+ "bbox": [
1769
+ 0.398,
1770
+ 0.298,
1771
+ 0.483,
1772
+ 0.304
1773
+ ],
1774
+ "angle": 0,
1775
+ "content": "'bbox': [76, 123, 489, 183]"
1776
+ },
1777
+ {
1778
+ "type": "text",
1779
+ "bbox": [
1780
+ 0.398,
1781
+ 0.304,
1782
+ 0.449,
1783
+ 0.309
1784
+ ],
1785
+ "angle": 0,
1786
+ "content": "'subelements': I"
1787
+ },
1788
+ {
1789
+ "type": "text",
1790
+ "bbox": [
1791
+ 0.432,
1792
+ 0.31,
1793
+ 0.483,
1794
+ 0.315
1795
+ ],
1796
+ "angle": 0,
1797
+ "content": "element: line."
1798
+ },
1799
+ {
1800
+ "type": "text",
1801
+ "bbox": [
1802
+ 0.432,
1803
+ 0.316,
1804
+ 0.519,
1805
+ 0.321
1806
+ ],
1807
+ "angle": 0,
1808
+ "content": "'bbox':[76,122,485,139]"
1809
+ },
1810
+ {
1811
+ "type": "text",
1812
+ "bbox": [
1813
+ 0.433,
1814
+ 0.322,
1815
+ 0.559,
1816
+ 0.328
1817
+ ],
1818
+ "angle": 0,
1819
+ "content": "'value': 'As stated in Section 2.1, the..."
1820
+ },
1821
+ {
1822
+ "type": "list",
1823
+ "bbox": [
1824
+ 0.432,
1825
+ 0.31,
1826
+ 0.559,
1827
+ 0.328
1828
+ ],
1829
+ "angle": 0,
1830
+ "content": null
1831
+ },
1832
+ {
1833
+ "type": "text",
1834
+ "bbox": [
1835
+ 0.68,
1836
+ 0.088,
1837
+ 0.854,
1838
+ 0.109
1839
+ ],
1840
+ "angle": 0,
1841
+ "content": "User: Locate where changes are required to perform this query, 'Add \"Introduction\" as a new title before \"Appendix\"."
1842
+ },
1843
+ {
1844
+ "type": "text",
1845
+ "bbox": [
1846
+ 0.682,
1847
+ 0.113,
1848
+ 0.854,
1849
+ 0.126
1850
+ ],
1851
+ "angle": 0,
1852
+ "content": "System: The location where \"Appendix\" is found is at [76, 76, 179, 95]."
1853
+ },
1854
+ {
1855
+ "type": "text",
1856
+ "bbox": [
1857
+ 0.682,
1858
+ 0.138,
1859
+ 0.851,
1860
+ 0.157
1861
+ ],
1862
+ "angle": 0,
1863
+ "content": "User: Find the specific location requiring changes to perform this request, 'Delete the third list item under \"A.1.Environment\":"
1864
+ },
1865
+ {
1866
+ "type": "text",
1867
+ "bbox": [
1868
+ 0.682,
1869
+ 0.162,
1870
+ 0.851,
1871
+ 0.176
1872
+ ],
1873
+ "angle": 0,
1874
+ "content": "System: The third list item under \"A.1.Environment\" can be found at [92, 398, 485, 459]."
1875
+ },
1876
+ {
1877
+ "type": "text",
1878
+ "bbox": [
1879
+ 0.682,
1880
+ 0.187,
1881
+ 0.837,
1882
+ 0.193
1883
+ ],
1884
+ "angle": 0,
1885
+ "content": "User: Find the place that needs to be edited to"
1886
+ },
1887
+ {
1888
+ "type": "text",
1889
+ "bbox": [
1890
+ 0.682,
1891
+ 0.194,
1892
+ 0.839,
1893
+ 0.2
1894
+ ],
1895
+ "angle": 0,
1896
+ "content": "process this query, 'Move the list under \"A.1.4.\""
1897
+ },
1898
+ {
1899
+ "type": "text",
1900
+ "bbox": [
1901
+ 0.682,
1902
+ 0.201,
1903
+ 0.838,
1904
+ 0.207
1905
+ ],
1906
+ "angle": 0,
1907
+ "content": "\"Brain-Body Interface\" to the bottom of the page!"
1908
+ },
1909
+ {
1910
+ "type": "text",
1911
+ "bbox": [
1912
+ 0.682,
1913
+ 0.212,
1914
+ 0.852,
1915
+ 0.225
1916
+ ],
1917
+ "angle": 0,
1918
+ "content": "System: The list under \"A.1.4. Brain-Body Interface\" can be found at [514, 175, 926, 531]."
1919
+ },
1920
+ {
1921
+ "type": "text",
1922
+ "bbox": [
1923
+ 0.682,
1924
+ 0.231,
1925
+ 0.853,
1926
+ 0.249
1927
+ ],
1928
+ "angle": 0,
1929
+ "content": "User: Please find the area that needs editing to carry out this user request, 'Merge the two list items under \"A.1.5. Experiment\"."
1930
+ },
1931
+ {
1932
+ "type": "text",
1933
+ "bbox": [
1934
+ 0.682,
1935
+ 0.255,
1936
+ 0.853,
1937
+ 0.274
1938
+ ],
1939
+ "angle": 0,
1940
+ "content": "System: The two list items under \"A.1.5. Experiment\" can be found at [530, 568, 923, 658] and [530, 657, 923, 806]."
1941
+ },
1942
+ {
1943
+ "type": "text",
1944
+ "bbox": [
1945
+ 0.682,
1946
+ 0.286,
1947
+ 0.853,
1948
+ 0.305
1949
+ ],
1950
+ "angle": 0,
1951
+ "content": "User: Please locate the part that needs modification to process this query, \"Swap the order of the first two list items under \"A.1. Design and Editing\"."
1952
+ },
1953
+ {
1954
+ "type": "text",
1955
+ "bbox": [
1956
+ 0.682,
1957
+ 0.31,
1958
+ 0.852,
1959
+ 0.33
1960
+ ],
1961
+ "angle": 0,
1962
+ "content": "System: The first two list items under \"A.1. Design and Editing\" can be found at [92, 236, 485, 341] and [92, 339, 485, 400]."
1963
+ },
1964
+ {
1965
+ "type": "list",
1966
+ "bbox": [
1967
+ 0.68,
1968
+ 0.088,
1969
+ 0.854,
1970
+ 0.33
1971
+ ],
1972
+ "angle": 0,
1973
+ "content": null
1974
+ },
1975
+ {
1976
+ "type": "image_caption",
1977
+ "bbox": [
1978
+ 0.206,
1979
+ 0.345,
1980
+ 0.788,
1981
+ 0.361
1982
+ ],
1983
+ "angle": 0,
1984
+ "content": "Figure 7: Illustration of LLM prompting to generate synthetic instruction tuning data."
1985
+ },
1986
+ {
1987
+ "type": "text",
1988
+ "bbox": [
1989
+ 0.15,
1990
+ 0.38,
1991
+ 0.483,
1992
+ 0.4
1993
+ ],
1994
+ "angle": 0,
1995
+ "content": "paragraph\\\" or \\\\\" move the table in between the two figures from top to bottom \\\"."
1996
+ },
1997
+ {
1998
+ "type": "text",
1999
+ "bbox": [
2000
+ 0.117,
2001
+ 0.409,
2002
+ 0.456,
2003
+ 0.437
2004
+ ],
2005
+ "angle": 0,
2006
+ "content": "3. Please emphasize edit requests that deal with more abstract pdf elements rather low level elements like words and lines."
2007
+ },
2008
+ {
2009
+ "type": "text",
2010
+ "bbox": [
2011
+ 0.117,
2012
+ 0.447,
2013
+ 0.47,
2014
+ 0.474
2015
+ ],
2016
+ "angle": 0,
2017
+ "content": "4. Use the x and y coordinates of the pdf-elements to figure out the relative spatial relations left of, right of."
2018
+ },
2019
+ {
2020
+ "type": "text",
2021
+ "bbox": [
2022
+ 0.117,
2023
+ 0.476,
2024
+ 0.469,
2025
+ 0.493
2026
+ ],
2027
+ "angle": 0,
2028
+ "content": "top, bottom or to assess which paragraph is first, second or third."
2029
+ },
2030
+ {
2031
+ "type": "text",
2032
+ "bbox": [
2033
+ 0.117,
2034
+ 0.503,
2035
+ 0.484,
2036
+ 0.541
2037
+ ],
2038
+ "angle": 0,
2039
+ "content": "5. Please count the number and order of pdf elements sequentially and correctly. Mistakes in counting are costly. Please think step by step while doing it."
2040
+ },
2041
+ {
2042
+ "type": "text",
2043
+ "bbox": [
2044
+ 0.117,
2045
+ 0.55,
2046
+ 0.484,
2047
+ 0.598
2048
+ ],
2049
+ "angle": 0,
2050
+ "content": "6. Please do not generate edit requests that implies modifying multiple pdf elements when there exists actually only one, for example, asking to swap the order of list-items when acutely there is a single list item."
2051
+ },
2052
+ {
2053
+ "type": "text",
2054
+ "bbox": [
2055
+ 0.117,
2056
+ 0.607,
2057
+ 0.478,
2058
+ 0.636
2059
+ ],
2060
+ "angle": 0,
2061
+ "content": "7. Please do not generate edits that call for left/right column, where there exists only a single column."
2062
+ },
2063
+ {
2064
+ "type": "text",
2065
+ "bbox": [
2066
+ 0.117,
2067
+ 0.645,
2068
+ 0.484,
2069
+ 0.703
2070
+ ],
2071
+ "angle": 0,
2072
+ "content": "8. In the presence of list, please generate at least one edit requests on list bullets. The output then should also refer only to the bullets, not the whole list. Check to see whether the bullets are numeric, alphabetic, or plain circular bullets before requesting its editing."
2073
+ },
2074
+ {
2075
+ "type": "text",
2076
+ "bbox": [
2077
+ 0.117,
2078
+ 0.712,
2079
+ 0.482,
2080
+ 0.74
2081
+ ],
2082
+ "angle": 0,
2083
+ "content": "9. Please geenrate the minimal bounding box that suffices to make the edit, i.e. ground the user request to the most fine-grained pdf-element."
2084
+ },
2085
+ {
2086
+ "type": "text",
2087
+ "bbox": [
2088
+ 0.117,
2089
+ 0.749,
2090
+ 0.482,
2091
+ 0.806
2092
+ ],
2093
+ "angle": 0,
2094
+ "content": "10. Please generate multiple back and forth conversations. The user query should follow this format: \"<phrase requesting the location>, \\\"<user edit query>\\\". For example: 'Please find the location to make this edit, \\ \"Change list bullets to numeric\\\"."
2095
+ },
2096
+ {
2097
+ "type": "text",
2098
+ "bbox": [
2099
+ 0.117,
2100
+ 0.815,
2101
+ 0.482,
2102
+ 0.845
2103
+ ],
2104
+ "angle": 0,
2105
+ "content": "11. Only generate questions and answers that can be answered with the given information, do not use background information."
2106
+ },
2107
+ {
2108
+ "type": "text",
2109
+ "bbox": [
2110
+ 0.117,
2111
+ 0.853,
2112
+ 0.449,
2113
+ 0.872
2114
+ ],
2115
+ "angle": 0,
2116
+ "content": "12. Please do not hallucinate pdf elements, its content, its subelements or edit requests."
2117
+ },
2118
+ {
2119
+ "type": "text",
2120
+ "bbox": [
2121
+ 0.117,
2122
+ 0.881,
2123
+ 0.478,
2124
+ 0.902
2125
+ ],
2126
+ "angle": 0,
2127
+ "content": "13. It's better to not generate requests instead of generating wrong requests."
2128
+ },
2129
+ {
2130
+ "type": "list",
2131
+ "bbox": [
2132
+ 0.117,
2133
+ 0.409,
2134
+ 0.484,
2135
+ 0.902
2136
+ ],
2137
+ "angle": 0,
2138
+ "content": null
2139
+ },
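+ 
+ Taken together, generating a sample reduces to one chat request: the system prompt with the guidelines above, the one-shot user/system exchange, and the target PDF's hierarchical JSON. A minimal sketch of the request assembly (the message layout and helper names are our own assumptions, not the authors' exact pipeline):
+ 
+ ```python
+ import json
+ 
+ def build_generation_messages(system_prompt: str, one_shot_user: str,
+                               one_shot_system: str, pdf_elements: dict) -> list:
+     # The system prompt carries the task description and the 13 guidelines;
+     # the one-shot pair fixes the expected conversation format; the final
+     # user turn is the hierarchical JSON representation of the PDF.
+     return [
+         {"role": "system", "content": system_prompt},
+         {"role": "user", "content": one_shot_user},
+         {"role": "assistant", "content": one_shot_system},
+         {"role": "user", "content": json.dumps(pdf_elements, indent=1)},
+     ]
+ ```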
2140
+ {
2141
+ "type": "title",
2142
+ "bbox": [
2143
+ 0.51,
2144
+ 0.375,
2145
+ 0.822,
2146
+ 0.393
2147
+ ],
2148
+ "angle": 0,
2149
+ "content": "C Code-NLI Prompt and Samples"
2150
+ },
2151
+ {
2152
+ "type": "text",
2153
+ "bbox": [
2154
+ 0.508,
2155
+ 0.401,
2156
+ 0.885,
2157
+ 0.482
2158
+ ],
2159
+ "angle": 0,
2160
+ "content": "To clean noisy synthetic data, we employ Code-NLI. The prompt used for Code-NLI includes a system prompt, that contains Python API function definitions, and few-shot prompts. We provide the system prompt given below:"
2161
+ },
2162
+ {
2163
+ "type": "text",
2164
+ "bbox": [
2165
+ 0.512,
2166
+ 0.493,
2167
+ 0.879,
2168
+ 0.599
2169
+ ],
2170
+ "angle": 0,
2171
+ "content": "You are an AI visual assistant that can do Natural Language Inference using python code. You will receive information describing a hypothesis. This includes a user edit request on pdf and a system output specifying the edit location with a bounding box. Your job is to create a python code, using listed APIs, that verifies that the bounding box correctly localizes the edit target in the user request. Each statement in the python code needs to be true to verify the correctness of the hypothesis."
2172
+ },
2173
+ {
2174
+ "type": "text",
2175
+ "bbox": [
2176
+ 0.512,
2177
+ 0.607,
2178
+ 0.879,
2179
+ 0.751
2180
+ ],
2181
+ "angle": 0,
2182
+ "content": "The user edit request focuses on editing different type of pdf elements: title, paragraph, list, figure, and table. The elements could have subelements: title -> {bullets -> {bullet}, list_item -> {line}}, paragraph -> {line}, title -> {line}. It refers to different elements in a compositional and hierarchical manner. For example, last line of second paragraph, first list-item of third list, etc. The system output localizes the element/subelement requiring edit with a bounding box. The format of the bounding box is [x1,y1,x2,y2] where (x1,y1) is the coordinate of top left corner and (x2,y2) is bottom right corner, where x2>x1 and y2>y1."
2183
+ },
2184
+ {
2185
+ "type": "text",
2186
+ "bbox": [
2187
+ 0.512,
2188
+ 0.758,
2189
+ 0.88,
2190
+ 0.835
2191
+ ],
2192
+ "angle": 0,
2193
+ "content": "Please use the following APIs to verify that the output bounding box indeed correctly captures the edit element/subelement. Please output only the python code using the listed APIs and nothing else. The element (dict) follows this format: {element_type: <element_type>,_bbox: [<x1,y1,x2,y2>]}, value: <value>, subelements:[<sub>subelement1,..]}."
2194
+ },
2195
+ {
2196
+ "type": "text",
2197
+ "bbox": [
2198
+ 0.512,
2199
+ 0.843,
2200
+ 0.853,
2201
+ 0.921
2202
+ ],
2203
+ "angle": 0,
2204
+ "content": "def is_element.element_type: str, bbox: list) -> bool:\n \"checks whether the element in_bbox is element_type\"\n \"Args: element_type (str): any element type ' title', 'paragraph', 'list', 'line', 'figure', 'table', 'bullets', '"
2205
+ },
2206
+ {
2207
+ "type": "list",
2208
+ "bbox": [
2209
+ 0.512,
2210
+ 0.493,
2211
+ 0.88,
2212
+ 0.921
2213
+ ],
2214
+ "angle": 0,
2215
+ "content": null
2216
+ },
2217
+ {
2218
+ "type": "page_number",
2219
+ "bbox": [
2220
+ 0.476,
2221
+ 0.928,
2222
+ 0.526,
2223
+ 0.941
2224
+ ],
2225
+ "angle": 0,
2226
+ "content": "31134"
2227
+ }
2228
+ ],
2229
+ [
2230
+ {
2231
+ "type": "code",
2232
+ "bbox": [
2233
+ 0.11,
2234
+ 0.087,
2235
+ 0.498,
2236
+ 0.915
2237
+ ],
2238
+ "angle": 0,
2239
+ "content": "```\ndef in_sectionsection section_title: str, bbox: list) -> bool:\n '''checks whether the bbox is in the section_title''' \n '''Args: section_title (str): the string contained in the title \n bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n ''' \ndef contains_string (string: str,_bbox: list) -> bool:\n '''checks whether the element in_bbox contains string''' \n '''Args: string (str): string to check \n _bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n ''' \ndef check_element_order_from_top_or_title( top_or_title (str), element_type: str,_bbox: \n list, order: int, column: str = None) -> bool: \n '''checks the sequence order of the element in \n _bbox of element_type from top of pdf or the section title where it lies''' \n '''Args: \n top_or_title (str): Either 'top' or 'title' from where to beign counting \n element_type (str): First hierarchy elements -- 'title', 'paragraph', 'list', 'figure' or 'table' \n _bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n order (int): denoting the sequence order of bounding box of element_type. 1,2,3 order counts from top, -1,-2.. order counts from last \n column (str): to check the order in left or right column. Defaults to None when pdf is single column. \n */\ndef check_subelement_order_from_top_or_title( top_or_title (str), subelement_type: str,_bbox: \n list, order: int, column: str = None) -> int: \n '''checks the sequence order of the subelement in bounding box of subelement_type from top of pdf or the section title where it lies''' \n '''Args: \n top_or_title (str): Either 'top' or 'title' from where to beign counting \n subelement_type (str): Second hierarchy elements -- 'line' (title and paragraph subelement), \\ 'bullets' (list subelement), '\\list_item' (list subelement) \n _bbox (list): a bounding box of the type [ x1, y1, x2, y2] \n order (int): denoting the sequence order of bounding box of subelement_type. 1,2,3 order counts from top, -1,-2.. order counts from last \n column (str): to check the order in left or right column. Defaults to None when pdf is single column. \n */\ndef check_subelement_order_in_element( subelement_type: str,_bbox: list, order: int) -> int: \n '''checks the sequence order of the subelement in bounding box of subelement_type within the element in which_bbox lies''' \n '''Args: \n subelement_type (str): Second hierarchy elements -- 'line' (title and paragraph subelement), 'bullets' (list subelement), 'list_item' (list subelement) \n _bbox (list): a bounding box of the type ["
2240
+ },
2241
+ {
2242
+ "type": "code",
2243
+ "bbox": [
2244
+ 0.501,
2245
+ 0.087,
2246
+ 0.892,
2247
+ 0.591
2248
+ ],
2249
+ "angle": 0,
2250
+ "content": "```c\nx1, y1, x2, y2]\norder (int): denoting the sequence order of_bbox of subelement_type. 1,2,3\norder counts from top, -1,-2.. order counts from last\n\" \"\ndef check_subsubelement_order_from_top_or_title(top_or_title (str), subsublement_type: str,_bbox: list, order: int, column: str = None) -> int:\n \" checks the sequence order of the subsublement in_bbox of subsublement_type from top of pdf or the section title where it lies\"\n \"Args:\n top_or_title (str): Either 'top' or 'title' from where to beign counting subsublement_type (str): Third hierarchy elements -- 'bullet' (subsublement of bullets), 'line' (subsublement of list_item)\nbbox (list): a bounding box of the type [x1, y1, x2, y2]\norder (int): denoting the sequence order of_bbox of subelement_type. 1,2,3 order counts from top, -1,-2.. order counts from last column (str): to check the order in left or right column. Defaults to None when pdf is single column.\n\" \"\ndef check_subsubelement_order_in_subelement(subsublement_type: str,_bbox: list, order: int) -> int:\n \" checks the sequence order of the subsublement in_bbox of subsublement_type within the subelement in which_bbox lies\"\n \"Args:\n subsublement_type (str): Third hierarchy elements -- 'bullet' (subsublement of bullets), 'line' (subsublement of list_item)\nbbox (list): a bounding box of the type [x1, y1, x2, y2]\norder (int): denoting the sequence order of_bbox of subsublement_type. 1,2,3 order counts from top, -1,-2.. order counts from last"
2251
+ },
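+ 
+ To make the filtering step concrete, the following sketch shows one way a generated verification program built from the APIs above could be executed against a PDF's element tree to accept or reject a sample (our own illustration; only two of the listed APIs are stubbed, and the one-check-per-line assumption mirrors Figure 8):
+ 
+ ```python
+ def _walk(elements: list):
+     # Depth-first over the element tree; elements may carry 'subelements'.
+     for el in elements:
+         yield el
+         yield from _walk(el.get("subelements", []))
+ 
+ def make_api(elements: list) -> dict:
+     def is_element(element_type, bbox):
+         return any(el.get("element_type") == element_type and el.get("bbox") == bbox
+                    for el in _walk(elements))
+ 
+     def contains_string(string, bbox):
+         return any(el.get("bbox") == bbox and string in (el.get("value") or "")
+                    for el in _walk(elements))
+ 
+     return {"is_element": is_element, "contains_string": contains_string}
+ 
+ def verify(generated_code: str, elements: list) -> bool:
+     # Every check must evaluate to True; any False result or crash rejects
+     # the synthetic sample, as in the Code-NLI filtering described above.
+     env, checks = make_api(elements), []
+     env["_record"] = checks.append
+     wrapped = "\n".join(f"_record({line.strip()})"
+                         for line in generated_code.splitlines() if line.strip())
+     try:
+         exec(wrapped, env)
+     except Exception:
+         return False
+     return bool(checks) and all(checks)
+ ```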
2252
+ {
2253
+ "type": "text",
2254
+ "bbox": [
2255
+ 0.509,
2256
+ 0.597,
2257
+ 0.885,
2258
+ 0.63
2259
+ ],
2260
+ "angle": 0,
2261
+ "content": "We also include some sample Python code generated by Code-NLI for verification in Figure 8."
2262
+ },
2263
+ {
2264
+ "type": "title",
2265
+ "bbox": [
2266
+ 0.51,
2267
+ 0.643,
2268
+ 0.698,
2269
+ 0.658
2270
+ ],
2271
+ "angle": 0,
2272
+ "content": "D Dataset Statistics"
2273
+ },
2274
+ {
2275
+ "type": "text",
2276
+ "bbox": [
2277
+ 0.508,
2278
+ 0.671,
2279
+ 0.885,
2280
+ 0.783
2281
+ ],
2282
+ "angle": 0,
2283
+ "content": "The total size of the synthetically generated data is 3M samples. The topic range is the same as the topic distribution of the base datasets: scientific articles, patents, finance, tenders, laws, and manuals. Similarly, image resolution is the same as base datasets: \\(1025 \\times 1025\\) for DocLayNet and \\(800 \\times 600\\) for PubLayNet."
2284
+ },
2285
+ {
2286
+ "type": "title",
2287
+ "bbox": [
2288
+ 0.51,
2289
+ 0.798,
2290
+ 0.719,
2291
+ 0.813
2292
+ ],
2293
+ "angle": 0,
2294
+ "content": "E Model Architecture"
2295
+ },
2296
+ {
2297
+ "type": "text",
2298
+ "bbox": [
2299
+ 0.508,
2300
+ 0.825,
2301
+ 0.885,
2302
+ 0.922
2303
+ ],
2304
+ "angle": 0,
2305
+ "content": ") Our model, DELOC, is based on Phi-3-V model. The Phi-3-V model's architecture is essentially composed of two components - an image encoder (CLIP ViT-L/14 (Radford et al., 2021)) and a transformer decoder (phi-3-mini). The visual tokens from the image encoder and concatenate with text"
2306
+ },
2307
+ {
2308
+ "type": "page_number",
2309
+ "bbox": [
2310
+ 0.476,
2311
+ 0.928,
2312
+ 0.526,
2313
+ 0.941
2314
+ ],
2315
+ "angle": 0,
2316
+ "content": "31135"
2317
+ }
2318
+ ],
2319
+ [
2320
+ {
2321
+ "type": "text",
2322
+ "bbox": [
2323
+ 0.147,
2324
+ 0.089,
2325
+ 0.443,
2326
+ 0.128
2327
+ ],
2328
+ "angle": 0,
2329
+ "content": "User: Please locate the part that needs to be edited to process this query, 'Delete the last line of the last paragraph in the left column'."
2330
+ },
2331
+ {
2332
+ "type": "text",
2333
+ "bbox": [
2334
+ 0.147,
2335
+ 0.139,
2336
+ 0.446,
2337
+ 0.177
2338
+ ],
2339
+ "angle": 0,
2340
+ "content": "System: The last paragraph in the left column is located at [84, 654, 484, 813] and the last line is at [84, 797, 140, 814]."
2341
+ },
2342
+ {
2343
+ "type": "image",
2344
+ "bbox": [
2345
+ 0.461,
2346
+ 0.114,
2347
+ 0.531,
2348
+ 0.157
2349
+ ],
2350
+ "angle": 0,
2351
+ "content": null
2352
+ },
2353
+ {
2354
+ "type": "text",
2355
+ "bbox": [
2356
+ 0.536,
2357
+ 0.089,
2358
+ 0.814,
2359
+ 0.178
2360
+ ],
2361
+ "angle": 0,
2362
+ "content": "is_element('list_item', [92,398,485,459]) \nin_section('A.1.Environment', [92,398,485,459]) \ncheck_subelement_order_in_element('list_item', [92,398,485,459], 2)"
2363
+ },
2364
+ {
2365
+ "type": "text",
2366
+ "bbox": [
2367
+ 0.147,
2368
+ 0.202,
2369
+ 0.446,
2370
+ 0.251
2371
+ ],
2372
+ "angle": 0,
2373
+ "content": "User: Please find the location that needs to be edited to process this query, 'Delete the reference \"[23]\" from the second paragraph in the left column'."
2374
+ },
2375
+ {
2376
+ "type": "text",
2377
+ "bbox": [
2378
+ 0.147,
2379
+ 0.264,
2380
+ 0.434,
2381
+ 0.303
2382
+ ],
2383
+ "angle": 0,
2384
+ "content": "System: The second paragraph in the left column is located at [84, 410, 484, 498] and the reference \"[23]\" is at [84, 482, 167, 499]."
2385
+ },
2386
+ {
2387
+ "type": "image",
2388
+ "bbox": [
2389
+ 0.458,
2390
+ 0.228,
2391
+ 0.53,
2392
+ 0.27
2393
+ ],
2394
+ "angle": 0,
2395
+ "content": null
2396
+ },
2397
+ {
2398
+ "type": "text",
2399
+ "bbox": [
2400
+ 0.537,
2401
+ 0.202,
2402
+ 0.805,
2403
+ 0.303
2404
+ ],
2405
+ "angle": 0,
2406
+ "content": "is_element('paragraph', [84, 410, 484, 498]) \ncheck_element_order_from_top_or_title('top', 'paragraph', [84, 410, 484, 498], 2, column='left') \ncontains_string(['[23]', [84, 482, 167, 499])"
2407
+ },
2408
+ {
2409
+ "type": "image_caption",
2410
+ "bbox": [
2411
+ 0.293,
2412
+ 0.315,
2413
+ 0.701,
2414
+ 0.33
2415
+ ],
2416
+ "angle": 0,
2417
+ "content": "Figure 8: Sample verification code generated by Code-NLI."
2418
+ },
2419
+ {
2420
+ "type": "image",
2421
+ "bbox": [
2422
+ 0.127,
2423
+ 0.344,
2424
+ 0.346,
2425
+ 0.557
2426
+ ],
2427
+ "angle": 0,
2428
+ "content": null
2429
+ },
2430
+ {
2431
+ "type": "image_footnote",
2432
+ "bbox": [
2433
+ 0.127,
2434
+ 0.563,
2435
+ 0.343,
2436
+ 0.601
2437
+ ],
2438
+ "angle": 0,
2439
+ "content": "The length of the table \"Exhibit 2\" is increased. The spread of the table is increased towards the left margin.."
2440
+ },
2441
+ {
2442
+ "type": "image",
2443
+ "bbox": [
2444
+ 0.356,
2445
+ 0.345,
2446
+ 0.599,
2447
+ 0.569
2448
+ ],
2449
+ "angle": 0,
2450
+ "content": null
2451
+ },
2452
+ {
2453
+ "type": "image_footnote",
2454
+ "bbox": [
2455
+ 0.362,
2456
+ 0.573,
2457
+ 0.591,
2458
+ 0.599
2459
+ ],
2460
+ "angle": 0,
2461
+ "content": "Moved the text \"Golden Chopstick, Takeaway menu\" above the purple line"
2462
+ },
2463
+ {
2464
+ "type": "image",
2465
+ "bbox": [
2466
+ 0.606,
2467
+ 0.345,
2468
+ 0.875,
2469
+ 0.563
2470
+ ],
2471
+ "angle": 0,
2472
+ "content": null
2473
+ },
2474
+ {
2475
+ "type": "image_footnote",
2476
+ "bbox": [
2477
+ 0.691,
2478
+ 0.586,
2479
+ 0.877,
2480
+ 0.6
2481
+ ],
2482
+ "angle": 0,
2483
+ "content": "GT: Pred:"
2484
+ },
2485
+ {
2486
+ "type": "image_caption",
2487
+ "bbox": [
2488
+ 0.172,
2489
+ 0.612,
2490
+ 0.822,
2491
+ 0.627
2492
+ ],
2493
+ "angle": 0,
2494
+ "content": "Figure 9: Additional Qualitative sample prediction from DELOC demonstrating its capabilities."
2495
+ },
2496
+ {
2497
+ "type": "text",
2498
+ "bbox": [
2499
+ 0.113,
2500
+ 0.643,
2501
+ 0.489,
2502
+ 0.689
2503
+ ],
2504
+ "angle": 0,
2505
+ "content": "tokens in an interleaved manner and fed to the transformer decoder to output the prediction. In total, the number of parameters totals 4.2B."
2506
+ },
2507
+ {
2508
+ "type": "text",
2509
+ "bbox": [
2510
+ 0.114,
2511
+ 0.691,
2512
+ 0.487,
2513
+ 0.723
2514
+ ],
2515
+ "angle": 0,
2516
+ "content": "The Phi-3-mini is a transformer decoder with 3072 hidden dimension, 32 heads and 32 layers."
2517
+ },
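+ 
+ As a shape-level sketch of the interleaving described above (our simplification, not Phi-3-V's actual implementation; 3072 is the hidden size mentioned, and the single image placeholder position is an assumption):
+ 
+ ```python
+ import torch
+ 
+ def interleave_tokens(text_embeds: torch.Tensor,
+                       visual_tokens: torch.Tensor,
+                       image_pos: int) -> torch.Tensor:
+     # text_embeds: [T, 3072] embedded text tokens; visual_tokens: [V, 3072]
+     # projected CLIP ViT-L/14 features. The visual tokens are spliced into
+     # the text sequence at the image placeholder position, and the combined
+     # sequence is fed to the phi-3-mini decoder.
+     return torch.cat([text_embeds[:image_pos],
+                       visual_tokens,
+                       text_embeds[image_pos:]], dim=0)
+ ```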
2518
+ {
2519
+ "type": "title",
2520
+ "bbox": [
2521
+ 0.114,
2522
+ 0.735,
2523
+ 0.388,
2524
+ 0.751
2525
+ ],
2526
+ "angle": 0,
2527
+ "content": "F Additional Training Details"
2528
+ },
2529
+ {
2530
+ "type": "text",
2531
+ "bbox": [
2532
+ 0.113,
2533
+ 0.761,
2534
+ 0.49,
2535
+ 0.921
2536
+ ],
2537
+ "angle": 0,
2538
+ "content": "The input image to our model is resized to \\(900 \\times 900\\). The text-heavy nature of PDF images requires that the model handle image resolutions dynamically according to the resolution of the text. Higher resolution text should result in higher image input resolution, and vice versa. This is taken care of by the dynamic cropping strategy (Dong et al., 2024) in the Phi-3-V model. It allows the input image to be dynamically cropped into a 2d array of blocks, which are then concatenated to represent the whole"
2539
+ },
2540
+ {
2541
+ "type": "text",
2542
+ "bbox": [
2543
+ 0.51,
2544
+ 0.643,
2545
+ 0.565,
2546
+ 0.658
2547
+ ],
2548
+ "angle": 0,
2549
+ "content": "image."
2550
+ },
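+ 
+ A minimal sketch of the dynamic cropping idea (our simplification of the strategy of Dong et al., 2024 as used in Phi-3-V; the 336-pixel block size matches CLIP ViT-L/14's input resolution but is an assumption here):
+ 
+ ```python
+ from PIL import Image
+ 
+ def crop_into_blocks(img: Image.Image, block: int = 336) -> list:
+     # Resize so both sides are multiples of the block size, then cut the
+     # image into a 2D grid of crops; each crop is encoded separately and
+     # the resulting tokens are concatenated to represent the whole image.
+     w = max(block, round(img.width / block) * block)
+     h = max(block, round(img.height / block) * block)
+     img = img.resize((w, h))
+     return [[img.crop((x, y, x + block, y + block))
+              for x in range(0, w, block)]
+             for y in range(0, h, block)]
+ ```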
2551
+ {
2552
+ "type": "title",
2553
+ "bbox": [
2554
+ 0.509,
2555
+ 0.671,
2556
+ 0.833,
2557
+ 0.687
2558
+ ],
2559
+ "angle": 0,
2560
+ "content": "G Additional Qualitative Examples"
2561
+ },
2562
+ {
2563
+ "type": "text",
2564
+ "bbox": [
2565
+ 0.508,
2566
+ 0.696,
2567
+ 0.885,
2568
+ 0.76
2569
+ ],
2570
+ "angle": 0,
2571
+ "content": "We add more qualitative samples to further illustrate the capabilities of DELOC. As can be seen in Figure 9, DELOC is able to ground user edit requests well both for PDFs and design documents."
2572
+ },
2573
+ {
2574
+ "type": "page_number",
2575
+ "bbox": [
2576
+ 0.476,
2577
+ 0.928,
2578
+ 0.526,
2579
+ 0.941
2580
+ ],
2581
+ "angle": 0,
2582
+ "content": "31136"
2583
+ }
2584
+ ]
2585
+ ]
2025/DELOC_ Document Element Localizer/b1ff4230-9685-4b29-9a1f-525a8461a4fb_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02b610b635e0809977308e4e054099fa75b71d740a25f144a7cbc23d7671fad9
3
+ size 1835809
2025/DELOC_ Document Element Localizer/full.md ADDED
@@ -0,0 +1,405 @@
1
+ # DELOC: Document Element Localizer
2
+
3
+ # Hammad Ayyubi $^{1*}$ , Puneet Mathur $^{2}$ , Md Mehrab Tanjim $^{2}$ , Vlad I Morariu $^{2}$
4
+
5
+ $^{1}$ Columbia University, $^{2}$ Adobe Research,
6
+
7
+ Correspondence: hayyubi@cs.columbia.edu
8
+
9
+ # Abstract
10
+
11
+ Editing documents and PDFs using natural language instructions is desirable for many reasons - ease of use, increasing accessibility to non-technical users, and for creativity. To do this automatically, a system needs to first understand the user's intent and convert this to an executable plan or command, and then the system needs to identify or localize the elements that the user desires to edit. While there exist methods that can accomplish these tasks, a major bottleneck in these systems is the inability to ground the spatial edit location effectively. We address this gap through our proposed system, DELOC (Document Element LOCALizer). DELOC adapts the grounding capabilities of existing Multimodal Large Language Model (MLLM) from natural images to PDFs. This adaptation involves two novel contributions: 1) synthetically generating PDF-grounding instruction tuning data from partially annotated datasets; and 2) synthetic data cleaning via Code-NLI, an NLI-inspired process to clean data using generated Python code. The effectiveness of DELOC is apparent in the $>2\mathrm{x}$ zero-shot improvement it achieves over the next best MLLM, GPT-4o.
12
+
13
+ # 1 Introduction
14
+
15
+ Editing documents and PDFs via natural language (Mathur et al., 2023; Suri et al., 2024) is an innovative and user-friendly advancement. This technology makes it easier for non-technical people to edit PDFs, speeds up the editing process, and facilitates document editing on a mobile device.
16
+
17
+ Typically, a two-stage process has been followed to edit PDFs from natural language requests. In the first stage, a model is required to predict the edit location (green highlight in Figure 1) spatially in the PDF. The edit location is a PDF element (paragraph, line, list, table, etc.).
18
+
19
+ ![](images/3471bc165d1c26d7258906fbdfb45df050adfad0e5090df4880608a459c0b1e1.jpg)
20
+ Figure 1: Illustration of a typical PDF edit process. Models predict Edit Location and Edit Command, which are combined to effect the required edit. This work focuses on improving the bottleneck step - Edit Location. To this end, the proposed system, DELOC, generates large-scale (3M) synthetic data to instruction tune a Multimodal LLM.
21
+
22
+ The second stage involves predicting the edit command (replace in Figure 1). The predicted bounding box and edit command can then simply be combined to effect the desired change. Existing PDF editing systems achieve high accuracy $(>85\%)$ in predicting edit commands (Mathur et al., 2023) but struggle with bounding box precision $(<50\%)$ (Suri et al., 2024). As such, we focus on improving spatial PDF element localization for a given edit request.
23
+
24
+ State-of-the-art systems ground user queries in images (Wang et al., 2023; You et al., 2023), mobile UIs (You et al., 2024), and web UIs (Hong et al., 2023), but they do not transfer directly to PDFs. PDFs are more verbose, and edit requests follow a hierarchical structure requiring domain knowledge. For instance, a request like "first list item of the third list in Section A.1" demands an understanding of the composition: section $\rightarrow$ list $\rightarrow$ list item.
25
+
26
+ To address these challenges, we propose DELOC (Document Element LOCALizer), which adapts the strong grounding capabilities of
27
+
28
+ ![](images/877ec0cf624bc8fc7f88120d472ecff5c21fda422b6ca29317517bcb9a1faebb.jpg)
29
+ Figure 2: Overview of DELOC. Document elements, their bounding box, and their hierarchical relations are first detected (DET) and formatted into a JSON structure (FJ). This JSON is fed to an LLM to generate synthetic data.
30
+
31
+ Multimodal Large Language Models (MLLMs) from natural images to PDFs. This adaptation involves instruction-tuning an MLLM specifically for PDFs. Our key contribution is the synthetic generation of high-quality instruction-tuning data, followed by an automated cleaning process called Code-NLI.
32
+
33
+ We first represent PDF elements (paragraphs, lines, lists, tables, titles, sections) in a spatially aware hierarchical JSON format (Figure 2), capturing each element's bounding box and hierarchical relationships (e.g., paragraph $\rightarrow$ lines, list $\rightarrow$ list items). This information is sourced from partially annotated PDF Document Analysis datasets (e.g., PubLayNet (Zhong et al., 2019)), using existing annotations and heuristically generating missing ones. Next, we input this structured PDF representation into a Large Language Model (LLM) to generate synthetic user requests for PDF editing, along with system responses specifying edit locations via bounding boxes. The hierarchical structure enables the LLM to produce compositional requests that resemble real user queries, while the bounding box data ensures precise edit location predictions.
34
+
35
+ We further clean noisy generated samples using a proposed process called Code-NLI (Code-based Natural Language Inference). Code-NLI (Figure 3) treats the generated user edit request and system response as the hypothesis, and the PDF as the premise. To verify the hypothesis, it leverages an LLM to generate Python code, which is then executed to filter out noisy samples.
36
+
37
+ To demonstrate the effectiveness of DELOC, we evaluate it on DocEdit (Mathur et al., 2023). Our results demonstrate that DELOC outperforms all existing zero-shot models on PDFs, including the proprietary MLLM GPT-4o, by $>2\mathrm{x}$. Our ablations demonstrate that DELOC's performance improves with
38
+
39
+ data scale, and that data cleaning via Code-NLI is crucial to its performance.
40
+
41
+ # 2 Related Work
42
+
43
+ There has been a surge of interest in natural language based PDF edit localization. DocEdit (Mathur et al., 2023) trains a multimodal transformer that directly regresses the edit bounding box given the document as input. DocEdit-v2 (Suri et al., 2024) trains an encoder-decoder architecture that outputs a segmentation mask for the edit location. By training models from scratch, these methods fail to utilize the excellent grounding capabilities of recent MLLMs.
44
+
45
+ A number of MLLMs (Chen et al., 2023; Zhang et al., 2023; Yuan et al., 2023; Lv et al., 2023; Lai et al., 2024; Ma et al., 2024) have been proposed that have shown impressive grounding abilities on natural images. Works like Ferret-UI (You et al., 2024), CogAgent (Hong et al., 2023), and MultiUI (Liu et al., 2024) have successfully adapted these MLLMs for mobile screens and web pages. We take inspiration from these works to adapt MLLMs for PDFs using large-scale synthetic data. In comparison, Agent-DocEdit (Wu et al., 2024) finetunes a grounding model, GLIP (Li* et al., 2022), only on a small training set.
46
+
47
+ # 3 Method
48
+
49
+ Our method adapts the excellent query understanding and grounding capabilities of existing Multimodal Large Language Models (MLLMs) for grounding edit queries in PDFs. To this end, we: 1) generate diverse, compositional, and clean instruction-tuning data; and 2) instruction-tune an MLLM on the generated data. We describe the synthetic data generation process below.
50
+
51
+ ![](images/ace1861ad173926c3c69e306ef5ebfeb2ba98dfbbb5ca3142b37385db11a6cf0.jpg)
52
+ Figure 3: Code-NLI based data cleaning. Generated data is treated as the hypothesis, which is validated by LLM-generated Python code given the PDF premise.
53
+
54
+ ![](images/be6a08206f968efc123b8bfa78b055b3bcd2634e1051cb93482a7cd6ed97be57.jpg)
55
+ Generated Code
56
+
57
+ Given a lack of large-scale instruction-tuning data for grounding edit queries in PDFs, we build an automatic synthetic data generation process. We discuss below the two stages of this process:
58
+
59
+ LLM powered data generation. We present the PDF to an LLM and prompt it to generate edit requests along with the corresponding bounding box (see Figure 2). Prompt details in Appendix B. To ensure that the generated edit requests capture the compositional nature of real user queries, we structure PDFs in a hierarchical JSON format that explicitly defines the parent-child relationships of each element. Each element contains information about its type (e.g., paragraph, line, list), its bounding box within the PDF, and its subelements (e.g., a line is a subelement of a paragraph). Additionally, we organize elements sequentially within the same hierarchy based on their y-coordinates, enabling the LLM to generate edit requests that reference these elements numerically (e.g., "second list," "last line"). Moreover, the inclusion of a bounding box for each element allows the LLM to generate precise edit location in the system response for the corresponding edit query.
60
+
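+ To make this concrete, the sketch below shows what one entry of the hierarchical representation could look like; the element types and coordinates are borrowed from the one-shot example in Appendix B, and any field not shown there (such as the first line's bbox) is an illustrative assumption.
+
+ ```python
+ # Illustrative hierarchical PDF representation: each element carries its
+ # type, its bounding box (x1, y1, x2, y2), and its subelements, ordered
+ # top-to-bottom by y-coordinate so the LLM can count them ("second list").
+ pdf_elements = [
+     {"element": "text",
+      "bbox": [84, 382, 484, 413],
+      "subelements": [
+          {"element": "line",
+           "bbox": [84, 382, 484, 398],  # assumption: first-line bbox not given in Appendix B
+           "value": "interactions were then functionally annotated for"},
+          {"element": "line",
+           "bbox": [84, 397, 377, 414],
+           "value": "functional processes and pathways involved."},
+      ]},
+ ]
+ ```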
61
+ To obtain this hierarchical PDF representation, we require annotations for each PDF element, including its type, bounding box, and hierarchical subelements. However, there does not exist a single dataset that contains all this information for all element types. As such, we use different datasets - PubLayNet and DocLayNet (Pfitzmann et al., 2022) - each containing a subset of these elements. Table 1 shows a non-exhaustive list of these elements and the corresponding dataset containing them. If an element's hierarchical subelements' annotations do not exist natively in the dataset, we use heuristics to create them.
62
+
63
+ <table><tr><td>Datasets</td><td>Title</td><td>Paragraph</td><td>Paragraph: Lines</td><td>List</td><td>List: List-items</td><td>List-item: Lines</td><td>Tables</td><td>Figures</td></tr><tr><td>PubLayNet</td><td>✓</td><td>✓</td><td>♠</td><td>✓</td><td>♠</td><td>♠</td><td>✓</td><td>✓</td></tr><tr><td>DocLayNet</td><td>✓</td><td>✓</td><td>✓</td><td>♠</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>
64
+
65
+ Table 1: Datasets and their PDF elements - either natively present (✓) or heuristically created (♠). Multiple datasets combine to give extensive element coverage.
66
+
67
+ These heuristics are relatively simple - comparing y-coordinates of words to create lines within paragraphs, comparing x-coordinates of lines to create list-items within lists, and so on. The granularity goes as low as words. This means we have annotations for words, equations, formulae, footnotes, page numbers, etc. Essentially, we tried to cover as many PDF elements as possible with the available datasets and heuristics. More details are in Appendix A.
68
+
69
+ Data cleaning with Code-NLI. Since the LLM generates data automatically, it can be noisy—for instance, an edit request might reference the second list item while the bounding box corresponds to the third. To filter out such inconsistencies, we draw inspiration from Natural Language Inference (NLI) (Bowman et al., 2015), which predicts whether a hypothesis aligns with a given premise. Here, the premise is the PDF, and the hypothesis is the generated user-system conversation.
70
+
71
+ To verify these conversations, we generate substatements that must hold true. For example, to confirm that a bounding box corresponds to the second list item, a counting statement must evaluate its position as 2 (Figure 3). These sub-statements are generated by an LLM as Pythonic function calls, which are then executed using our implemented APIs for verification. Pythonic statements make verification algorithmic and automatic. API details and prompts in Appendix C.
72
+
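+ As a rough illustration of how these checks can be executed, the sketch below runs LLM-generated Pythonic statements against the verification APIs of Appendix C and keeps a sample only if every statement holds; the verify_sample wrapper is our own illustration, not the paper's implementation.
+
+ ```python
+ # Sketch of Code-NLI filtering: every generated check must evaluate to True.
+ # The checks reuse API names from Appendix C; verify_sample is hypothetical.
+ def verify_sample(generated_checks: list, api_namespace: dict) -> bool:
+     for check in generated_checks:
+         try:
+             if not eval(check, api_namespace):  # run one Pythonic sub-statement
+                 return False                    # hypothesis contradicted by the PDF premise
+         except Exception:
+             return False                        # malformed code counts as a noisy sample
+     return True
+
+ # e.g., the generated code shown in Figure 8:
+ checks = [
+     "is_element('list_item', [92, 398, 485, 459])",
+     "in_section('A.1.Environment', [92, 398, 485, 459])",
+     "check_subelement_order_in_element('list_item', [92, 398, 485, 459], 2)",
+ ]
+ ```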
73
+ Once we have clean synthetic instruction-tuning data, we finetune an MLLM. The input is an image of the PDF and a user edit request. The output is a bounding box localizing the edit element.
74
+
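+ Concretely, one training sample could look like the following; this is a hypothetical example in the conversation format of Appendix B, and the file name is a placeholder.
+
+ ```python
+ # A single instruction-tuning sample: PDF page image plus user edit request
+ # as input, bounding box of the edit target as output.
+ sample = {
+     "image": "pdf_page_00042.png",  # placeholder page render
+     "user": ("Find the specific location requiring changes to perform this "
+              "request, 'Delete the third list item under \"A.1.Environment\"'."),
+     "system": ("The third list item under \"A.1.Environment\" can be found "
+                "at [92, 398, 485, 459]."),
+ }
+ ```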
75
+ # 4 Experiments
76
+
77
+ To balance quality and the cost of long context, we use LLaMA-3.1 70B (Grattafori and Team, 2024) for generating synthetic data. It is run on 40 Nvidia A100-80 GiB GPUs for 2 days. Next, GPT-4o (OpenAI, 2024) is used for Code-NLI cleaning, as the small context length there allows us to optimize for quality. This results in $\sim 3\mathrm{M}$ samples for instruction tuning. We keep $95\%$ of the samples for training and the rest for validation.
78
+
79
+ <table><tr><td rowspan="2">Model</td><td colspan="3">PDF</td><td colspan="3">Design</td></tr><tr><td>A@0.5</td><td>A@0.30</td><td>A@0.25</td><td>A@0.5</td><td>A@0.30</td><td>A@0.25</td></tr><tr><td colspan="7">Zero-Shot</td></tr><tr><td>GPT-4o</td><td>5.624</td><td>14.258</td><td>17.885</td><td>28.961</td><td>46.766</td><td>52.582</td></tr><tr><td>CogAgent</td><td>4.16</td><td>9.42</td><td>11.13</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Qwen2.5-VL</td><td>2.90</td><td>6.97</td><td>8.72</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Ferret-UI</td><td>0.0</td><td>0.22</td><td>0.28</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Phi-3-V</td><td>0.450</td><td>2.727</td><td>4.189</td><td>10.423</td><td>25.264</td><td>31.316</td></tr><tr><td>DELOC</td><td>14.703</td><td>30.391</td><td>35.198</td><td>32.421</td><td>51.489</td><td>57.445</td></tr><tr><td colspan="7">Finetuned</td></tr><tr><td>DocEdit</td><td>36.500</td><td>-</td><td>-</td><td>34.340</td><td>-</td><td>-</td></tr><tr><td>DocEdit-v2</td><td>48.690</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Phi-3-V</td><td>26.567</td><td>48.524</td><td>54.371</td><td>34.523</td><td>57.183</td><td>62.272</td></tr><tr><td>DELOC</td><td>49.620</td><td>67.023</td><td>69.975</td><td>57.012</td><td>71.566</td><td>74.976</td></tr></table>
80
+
81
+ Table 2: Comparison of DELOC with state-of-the-art on DocEdit edit location (bbox) prediction. DELOC outperforms both existing Multimodal LLMs in the zero-shot setting and specialized models in the finetuned setting. $A@X$ denotes Accuracy at IoU of $X$.
82
+
83
+ <table><tr><td>Model</td><td>A@0.5</td></tr><tr><td>DELOC</td><td>6.297</td></tr><tr><td>- w/o CODE-NLI Filtering</td><td>5.848</td></tr></table>
84
+
85
+ Table 3: Ablation of Code-NLI. The filtering step improves DELOC performance.
86
+
87
+ More data statistics are in Appendix D.
88
+
89
+ For instruction tuning, Phi-3-V (Abdin and Team, 2024) is used, as it is small and thus easy to train. We use a learning rate of 5e-5, weight decay of 0.12, the AdamW optimizer, and a linear scheduler with warmup. It takes approximately 10 hours to train the model on 128 Nvidia A100-80 GiB GPUs. More architecture and training details are in Appendix E and Appendix F.
90
+
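+ For reference, the stated optimizer settings translate into roughly the following setup; the warmup and total step counts are placeholders, since the paper does not report them.
+
+ ```python
+ # Sketch of the reported optimizer configuration (lr 5e-5, weight decay
+ # 0.12, AdamW, linear schedule with warmup).
+ import torch
+ from transformers import get_linear_schedule_with_warmup
+
+ model = torch.nn.Linear(8, 8)  # stand-in for the Phi-3-V backbone
+ optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.12)
+ scheduler = get_linear_schedule_with_warmup(
+     optimizer,
+     num_warmup_steps=500,       # assumption: warmup length not reported
+     num_training_steps=50_000,  # assumption: total steps not reported
+ )
+ ```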
91
+ We evaluate our proposed approach on the test set of the DocEdit dataset (Mathur et al., 2023). The test set is split into two subsets: PDFs, which are more verbose, and Designs, which are graphical. We use the originally proposed metric of calculating accuracy by evaluating whether the Intersection over Union (IoU) between the predicted bbox and the ground-truth bbox is greater than a threshold. For baselines, we select state-of-the-art MLLMs that understand text well from images. Consequently, we select GPT-4o, Phi-3-V, CogAgent, Ferret-UI, Qwen2.5-VL (Bai et al., 2025) and Fuyu (Bavishi et al., 2023). We also consider as baselines specialized models - DocEdit v1 and v2 - that are finetuned on DocEdit. We compare DELOC against these baselines in two settings: zero-shot, where DELOC is not finetuned on the DocEdit train set, and finetuned, where it is.
92
+
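+ The metric itself is simple to compute; a minimal sketch of $A@X$ is given below, assuming boxes in [x1, y1, x2, y2] format.
+
+ ```python
+ # Accuracy at IoU threshold X: a prediction counts as correct if its IoU
+ # with the ground-truth box exceeds the threshold.
+ def iou(a, b):
+     ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+     ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+     inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
+     area_a = (a[2] - a[0]) * (a[3] - a[1])
+     area_b = (b[2] - b[0]) * (b[3] - b[1])
+     union = area_a + area_b - inter
+     return inter / union if union > 0 else 0.0
+
+ def accuracy_at(preds, gts, thresh=0.5):
+     return sum(iou(p, g) > thresh for p, g in zip(preds, gts)) / len(preds)
+ ```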
93
+ Our main results are summarized in Table 2. We make the following observations: 1) DELOC outperforms both open-source and proprietary
94
+
95
+ ![](images/346a7c830ca3cd72211ea5d8dd72aab6483a319921ba7abd1e077baed3bca2b5.jpg)
96
+ Figure 4: Datasize Ablation: DELOC's performance improves with data.
97
+
98
+ ![](images/049356e43ffae248e45e761dc8d937f584b751c1303a1426d4329383dc78a6d1.jpg)
99
+ Figure 5: Qualitative sample prediction from DELOC.
100
+
101
+ closed-source MLLMs by $\sim 3\mathrm{x}$ on the PDF subset and $\sim 12\%$ on the Design subset. 2) DELOC outperforms all specialized finetuned models on both subsets. 3) DELOC improves over its backbone MLLM, Phi-3-V, by $28\mathrm{x}$ on PDFs and $3\mathrm{x}$ on Designs, underscoring the significance of our synthetic instruction tuning. 4) The best existing zero-shot MLLM performance is at best $\sim 5\%$ (A@0.5), indicating that grounding in PDFs is challenging. 5) All MLLMs perform much better on the Design subset. This is understandable given that all MLLMs have been pretrained on natural images, which are closer in domain to Designs than to verbose PDFs. Notably, we found Fuyu to perform quite poorly; hence we do not include it in Table 2. We also do not compare against Agent-DocEdit, as it includes an additional post-processing step that assumes the availability of ground-truth bboxes for all elements in a PDF.
102
+
103
+ Our ablation on Code-NLI in Table 3 indicates the importance of data cleaning to our approach. We also manually evaluate the quality of Code-NLI on 25 samples ourselves. Our findings are: 1) Precision is $84\%$ : this implies that most of the data that Code-NLI predicts as correct is actually correct; only $16\%$ of the data that passes the Code-NLI filter is incorrect. 2) Recall is $63.64\%$ :
104
+
105
+ This implies it rejects a decent amount of good data $(36\%)$ . This loss is acceptable for our use case since we generate a very large amount of synthetic data. The important consideration for us is that incorrect data should not be flagged as correct - which Code-NLI largely ensures ($84\%$ precision).
106
+
107
+ Figure 4 demonstrates that the performance of DELOC improves with data size. We also provide a qualitative example in Figure 5, which shows that: 1) DELOC acquires an effective capability for verbose PDF grounding; and 2) ambiguity in the expected response can lead to divergence from the ground truth. More examples are in Appendix G.
108
+
109
+ # 5 Conclusion
110
+
111
+ In this work, we propose DELOC, a system to spatially ground PDF edit requests. DELOC leverages the spatial and hierarchical structure of PDFs to generate 3M synthetic instruction-tuning samples that successfully adapt existing MLLMs for PDF grounding. Our results demonstrate that DELOC outperforms both existing MLLMs, including GPT-4o, and specialized models on DocEdit.
112
+
113
+ # Limitations
114
+
115
+ We attempted to cover a comprehensive list of PDF elements in our synthetically generated data. This coverage is mostly dependent on the element annotations present in the PDF analysis datasets we began with to create our hierarchical representation. As such, there could be some PDF elements that are not covered. We leave for future work the addition of more PDF analysis datasets to increase element coverage. Furthermore, this method (like most grounding MLLM approaches) generates bounding boxes using auto-regressive prediction. The loss used assigns equal weight to all predictions that are not exactly the same as the ground-truth bbox, irrespective of their proximity to the ground-truth bbox. Ideally, the loss should be higher for a predicted bbox that is farther from the ground truth than for a predicted bbox that is closer to it. Reinforcement learning based preference optimization could be a way to address this issue in future work.
116
+
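+ To illustrate the point, a reward that decays smoothly with box distance, such as the IoU between prediction and ground truth, could serve as the preference signal; the sketch below is our illustration of the idea, not a method from this paper.
+
+ ```python
+ # Token-level cross-entropy scores both candidates below as equally wrong;
+ # an IoU-based reward would prefer the nearer one.
+ def iou(a, b):
+     ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+     ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+     inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
+     union = ((a[2] - a[0]) * (a[3] - a[1])
+              + (b[2] - b[0]) * (b[3] - b[1]) - inter)
+     return inter / union if union > 0 else 0.0
+
+ gt = [92, 398, 485, 459]
+ near = [92, 399, 485, 459]     # one pixel off: iou(near, gt) is close to 1
+ far = [500, 500, 600, 600]     # completely wrong: iou(far, gt) == 0
+ reward = lambda pred: iou(pred, gt)
+ ```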
117
+ # References
118
+
119
+ Marah Abdin and The Phi-3 Team. 2024. Phi-3 technical report: A highly capable language model locally on your phone. Preprint, arXiv:2404.14219.
120
+ Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang,
121
+
122
+ Shijie Wang, Jun Tang, Humen Zhong, Yanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. 2025. Qwen2.5-vl technical report. Preprint, arXiv:2502.13923.
123
+ Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sağnak Taşırlar. 2023. Introducing our multimodal models.
124
+ Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
125
+ Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. 2023. Shikra: Unleashing multimodal llm's referential dialogue magic. arXiv preprint arXiv:2306.15195.
126
+ Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Songyang Zhang, Haodong Duan, Wenwei Zhang, Yining Li, Hang Yan, Yang Gao, Zhe Chen, Xinyue Zhang, Wei Li, Jingwen Li, Wenhai Wang, Kai Chen, Conghui He, Xingcheng Zhang, Jifeng Dai, Yu Qiao, Dahua Lin, and Jiaqi Wang. 2024. Internlm-xcomposer2-4khd: A pioneering large vision-language model handling resolutions from 336 pixels to 4k hd. Preprint, arXiv:2404.06512.
127
+ Aaron Grattafori and The LLaMA Team. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.
128
+ Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhui Ji, Yan Wang, Zihan Wang, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. Cogagent: A visual language model for gui agents. Preprint, arXiv:2312.08914.
129
+ Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. 2024. Lisa: Reasoning segmentation via large language model. Preprint, arXiv:2308.00692.
130
+ Liunian Harold Li*, Pengchuan Zhang*, Haotian Zhang*, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. 2022. Grounded language-image pre-training. In CVPR.
131
+ Junpeng Liu, Tianyue Ou, Yifan Song, Yuxiao Qu, Wai Lam, Chenyan Xiong, Wenhu Chen, Graham Neubig, and Xiang Yue. 2024. Harnessing webpage uis for text-rich visual understanding. Preprint, arXiv:2410.13824.
132
+ Tengchao Lv, Yupan Huang, Jingye Chen, Lei Cui, Shuming Ma, Yaoyao Chang, Shaohan Huang, Wenhui Wang, Li Dong, Weiyao Luo, et al. 2023.
133
+
134
+ Kosmos-2.5: A multimodal literate model. arXiv preprint arXiv:2309.11419.
135
+ Chuofan Ma, Yi Jiang, Jiannan Wu, Zehuan Yuan, and Xiaojuan Qi. 2024. Groma: Localized visual tokenization for grounding multimodal large language models. arXiv preprint arXiv:2404.13013.
136
+ Puneet Mathur, Rajiv Jain, Jiuxiang Gu, Franck Dernoncourt, Dinesh Manocha, and Vlad Morariu. 2023. Docedit: Language-guided document editing. In Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI).
137
+ OpenAI. 2024. Gpt-4o system card. Preprint, arXiv:2410.21276.
138
+ Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar. 2022. Doclaynet: A large human-annotated dataset for documentlayout segmentation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, page 3743-3751. ACM.
139
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. Preprint, arXiv:2103.00020.
140
+ Manan Suri, Puneet Mathur, Franck Dernoncourt, Rajiv Jain, Vlad I Morariu, Ramit Sawhney, Preslav Nakov, and Dinesh Manocha. 2024. DocEdit-v2: Document structure editing via multimodal LLM grounding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15485-15505, Miami, Florida, USA. Association for Computational Linguistics.
141
+ Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. Cogvlm: Visual expert for pretrained language models. Preprint, arXiv:2311.03079.
142
+ Te-Lin Wu, Rajiv Jain, Yufan Zhou, Puneet Mathur, and Vlad I Morariu. 2024. Agent-docedit: Language-instructed LLM agent for content-rich document editing. In First Conference on Language Modeling.
143
+ Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, and Yinfei Yang. 2023. Ferret: Refer and ground anything anywhere at any granularity. Preprint, arXiv:2310.07704.
144
+ Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, and Zhe Gan. 2024. Ferret-ui: Grounded mobile ui understanding with multimodal llms. Preprint, arXiv:2404.05719.
145
+
146
+ Yuqian Yuan, Wentong Li, Jian Liu, Dongqi Tang, Xinjie Luo, Chi Qin, Lei Zhang, and Jianke Zhu. 2023. Osprey: Pixel understanding with visual instruction tuning. Preprint, arXiv:2312.10032.
147
+ Ao Zhang, Yuan Yao, Wei Ji, Zhiyuan Liu, and Tat-Seng Chua. 2023. Next-chat: An lmm for chat, detection and segmentation. Preprint, arXiv:2311.04498.
148
+ Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. Preprint, arXiv:1908.07836.
149
+
150
+ We provide additional details here for further clarification.
151
+
152
+ - Element Creation Heuristics (Appendix A)
153
+ - Synthetic Data Generation Prompt (Appendix B)
154
+ - Code-NLI Prompt and Samples (Appendix C)
155
+ - Data Statistics (Appendix D)
156
+ - Model Architecture (Appendix E)
157
+ - Additional Training Details (Appendix F)
158
+ - Additional Qualitative Examples (Appendix G)
159
+
160
+ ![](images/46a94597db36ae44aab255a15c85781bcea974a3a96ee472a3a25ffc18c9b672.jpg)
161
+ Figure 6: Sample output from using our heuristic to create lines and list-items.
162
+
163
+ # A Element Creation Heuristics
164
+
165
+ We require annotations for elements in PDFs to create the spatially-aware hierarchical representation. While an element annotation may be present in a dataset, its subelement or parent element may not have the requisite annotation available. We use heuristics to create annotations for such elements. To create lines within a paragraph, we take the bounding box of every word in the paragraph.
166
+
167
+ All the words with the same y-coordinate are considered to be on the same line. Similarly, to create lists from lines, we compare the x-coordinates of the beginnings of the lines; outliers are considered to be the beginning of a new list item. We illustrate the output of these heuristics for creating lines and list-items in Figure 6. While they are simple, they work quite well.
168
+
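+ A minimal sketch of the line-grouping step is shown below; the y-tolerance and the word format are our assumptions, since the paper only describes the heuristic at a high level.
+
+ ```python
+ # Group words into lines: words whose boxes start at (approximately) the
+ # same y-coordinate belong to the same line. words is a list of
+ # (text, [x1, y1, x2, y2]) pairs.
+ def group_words_into_lines(words, y_tol=2):
+     lines, current, last_y = [], [], None
+     for text, bbox in sorted(words, key=lambda w: (w[1][1], w[1][0])):
+         if last_y is not None and abs(bbox[1] - last_y) > y_tol:
+             lines.append(current)   # y jumped: start a new line
+             current = []
+         current.append((text, bbox))
+         last_y = bbox[1]
+     if current:
+         lines.append(current)
+     return lines
+ ```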
169
+ # B Synthetic Data Generation Prompt
170
+
171
+ Once we have a hierarchical representation of the PDF elements along with their corresponding bounding box annotations, we feed them into an LLM to generate the instruction tuning data. Apart from the PDF elements, we input a system prompt containing the guidelines for generating an instruction tuning set and a one-shot example of how the user-system conversation should be formatted. The process is illustrated in Figure 7.
172
+
173
+ The system prompt contains guidelines describing the task, input, and expected output. It contains instructions to generate diverse and compositional edit requests, to only utilize the information in the given hierarchical PDF representation, to not hallucinate, and so on. In addition, we provide a one-shot example to the LLM to further illustrate our requirements and guide it to follow the expected format.
174
+
175
+ We provide the system prompt below:
176
+
177
+ You are an AI visual assistant that can analyze PDFs. You will receive information describing a pdf, where each pdf element (text, paragraph, header, footer, chart etc.) detection is represented in a json format. Each element is denoted by its type, followed by its bounding box coordinates, followed by its value and/or its subelements. Bounding box coordinates are represented as $(x1, y1, x2, y2)$ . These values correspond to the top left $x$ , top left $y$ , bottom right $x$ , and bottom right $y$ . The subelements are formatted and indented with spaces. The content of a pdf element is found at the 'value' key. The pdf elements are organized in the order in which they are present in the pdf: top to bottom, left to right. The pdfs can be single column, double column or multiple columns. Judge the number of columns in the pdfs by looking at the relative positioning of the pdf elements' $x1$ .
178
+
179
+ Using the provided texts and coordinates, design question and answer pairs simulating the interactions between a user and system. Conversations should focus on potential user edit requests on the pdf elements (vs perception). Please follow the guidelines below while generating the conversations:
180
+
181
+ 1. The edit requests should be diverse, requiring different kind of editing like adding, deleting, changing, modifying, swapping, moving, replacing, merging, splitting, and so on.
182
+ 2. Please refer to the pdf-element that needs editing in diverse compositional ways. For example, \"delete last line of the third
183
+
184
+ # System Prompt
185
+
186
+ You are an AI visual assistant that can analyze PDFs. You will receive information describing a pdf, where each pdf element (text, paragraph, header, footer, chart etc.) detection is represented in a json format. Each element is denoted by its type, followed by its bounding box coordinates, followed by its value and/or its subelements. Bounding box coordinates are represented as (x1, y1, x2, y2). These values correspond to the top left $x$ , top left $y$ , bottom right $x$ , and bottom right $y$ . The subelements are formatted and indented with spaces. The content of a pdf element is found at the 'value' key. The pdf elements are organized in the order in which they are present in the pdf: top to bottom, left to right. The pdfs can be single column, double column or multiple columns. Judge the number of columns in the pdfs by looking at the relative positioning of the pdf elements' $x_1$ . Using the provided texts and coordinates, design question and answer pairs simulating the interactions between a user and system. Conversations should focus on potential user edit requests on the pdf elements (vs perception).
187
+
188
+ Please follow the guidelines below while generating the conversations:
189
+
190
+ 1. The edit requests should be diverse, requiring different kind of editing like adding, deleting, changing, modifying, swapping, moving, replacing, merging, splitting, and so on.
191
+ 2. Please refer to the pdf-element that needs editing in diverse compositional ways. For example, "delete last line of the third paragraph" or "move the table in between the two figures from top to bottom".
192
+
193
+ # One Shot Example
194
+
195
+ <user>
+
+ {'element': 'text',
+  'bbox': [84, 382, 484, 413],
+  'subelements': [{'element': 'line',
+    'value': 'interactions were then functionally annotated for'},
+   {'element': 'line',
+    'bbox': [84, 397, 377, 414],
+    'value': 'functional processes and pathways involved.'}]},
+ {'element': 'text',
+  'bbox': [84, 410, 484, 498],
+  'subelements': [{'element': 'line',
+    'bbox': [109, 411, 484, 428], ...
+
+ <system>
221
+
222
+ User: Locate where changes are required to perform this query, 'Move the top figure from the center of the page to the left'.
223
+ System: The figure that is required to move is located at [241, 88, 756, 323].
224
+
225
+ User: Find the specific location requiring changes to perform this request, 'Underline the title "Results and Discussion"'.
226
+ System: The title "Results and Discussion" can be found at [513, 675, 721, 691].
227
+
228
+ # Detections
229
+
230
+ {'element': 'title',
+  'bbox': [76, 76, 179, 95],
+  'subelements': [{'element': 'line',
+    'bbox': [76, 76, 179, 94],
+    'value': 'Appendix'}]},
+ {'element': 'text',
+  'bbox': [76, 123, 489, 183],
+  'subelements': [{'element': 'line',
+    'bbox': [76, 122, 485, 139],
+    'value': 'As stated in Section 2.1, the...
251
+
252
+ User: Locate where changes are required to perform this query, 'Add "Introduction" as a new title before "Appendix"'.
253
+ System: The location where "Appendix" is found is at [76, 76, 179, 95].
254
+ User: Find the specific location requiring changes to perform this request, 'Delete the third list item under "A.1.Environment"'.
255
+ System: The third list item under "A.1.Environment" can be found at [92, 398, 485, 459].
256
+ User: Find the place that needs to be edited to process this query, 'Move the list under "A.1.4. Brain-Body Interface" to the bottom of the page'.
259
+ System: The list under "A.1.4. Brain-Body Interface" can be found at [514, 175, 926, 531].
260
+ User: Please find the area that needs editing to carry out this user request, 'Merge the two list items under "A.1.5. Experiment".
261
+ System: The two list items under "A.1.5. Experiment" can be found at [530, 568, 923, 658] and [530, 657, 923, 806].
262
+ User: Please locate the part that needs modification to process this query, "Swap the order of the first two list items under "A.1. Design and Editing".
263
+ System: The first two list items under "A.1. Design and Editing" can be found at [92, 236, 485, 341] and [92, 339, 485, 400].
264
+
265
+ Figure 7: Illustration of LLM prompting to generate synthetic instruction tuning data.
266
+
267
+ paragraph\" or \"move the table in between the two figures from top to bottom\".
268
+
269
+ 3. Please emphasize edit requests that deal with more abstract pdf elements rather than low-level elements like words and lines.
270
+ 4. Use the x and y coordinates of the pdf-elements to figure out the relative spatial relations (left of, right of, top, bottom) or to assess which paragraph is first, second or third.
272
+ 5. Please count the number and order of pdf elements sequentially and correctly. Mistakes in counting are costly. Please think step by step while doing it.
273
+ 6. Please do not generate edit requests that imply modifying multiple pdf elements when there actually exists only one, for example, asking to swap the order of list-items when actually there is a single list item.
274
+ 7. Please do not generate edits that call for the left/right column where there exists only a single column.
275
+ 8. In the presence of a list, please generate at least one edit request on list bullets. The output then should also refer only to the bullets, not the whole list. Check whether the bullets are numeric, alphabetic, or plain circular bullets before requesting their editing.
276
+ 9. Please generate the minimal bounding box that suffices to make the edit, i.e. ground the user request to the most fine-grained pdf-element.
277
+ 10. Please generate multiple back-and-forth conversations. The user query should follow this format: "<phrase requesting the location>, '<user edit query>'". For example: "Please find the location to make this edit, 'Change list bullets to numeric'".
278
+ 11. Only generate questions and answers that can be answered with the given information, do not use background information.
279
+ 12. Please do not hallucinate pdf elements, its content, its subelements or edit requests.
280
+ 13. It's better to not generate requests instead of generating wrong requests.
281
+
282
+ # C Code-NLI Prompt and Samples
283
+
284
+ To clean noisy synthetic data, we employ Code-NLI. The prompt used for Code-NLI includes a system prompt, which contains Python API function definitions, and few-shot prompts. We provide the system prompt below:
285
+
286
+ You are an AI visual assistant that can do Natural Language Inference using python code. You will receive information describing a hypothesis. This includes a user edit request on pdf and a system output specifying the edit location with a bounding box. Your job is to create a python code, using listed APIs, that verifies that the bounding box correctly localizes the edit target in the user request. Each statement in the python code needs to be true to verify the correctness of the hypothesis.
287
+ The user edit request focuses on editing different types of pdf elements: title, paragraph, list, figure, and table. The elements could have subelements: list -> {bullets -> {bullet}, list_item -> {line}}, paragraph -> {line}, title -> {line}. It refers to different elements in a compositional and hierarchical manner. For example, last line of second paragraph, first list-item of third list, etc. The system output localizes the element/subelement requiring edit with a bounding box. The format of the bounding box is [x1,y1,x2,y2] where (x1,y1) is the coordinate of the top left corner and (x2,y2) is the bottom right corner, where x2>x1 and y2>y1.
288
+ Please use the following APIs to verify that the output bounding box indeed correctly captures the edit element/subelement. Please output only the python code using the listed APIs and nothing else. The element (dict) follows this format: {'element_type': <element_type>, 'bbox': [<x1, y1, x2, y2>], 'value': <value>, 'subelements': [<subelement1>, ...]}.
289
+ ```python
+ def is_element(element_type: str, bbox: list) -> bool:
+     '''checks whether the element in bbox is element_type
+     Args:
+         element_type (str): any element type -- 'title', 'paragraph', 'list',
+             'line', 'figure', 'table', 'bullets', ...
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+     '''
+
+ def in_section(section_title: str, bbox: list) -> bool:
+     '''checks whether the bbox is in the section section_title
+     Args:
+         section_title (str): the string contained in the title
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+     '''
+
+ def contains_string(string: str, bbox: list) -> bool:
+     '''checks whether the element in bbox contains string
+     Args:
+         string (str): the string to check
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+     '''
+
+ def check_element_order_from_top_or_title(top_or_title: str, element_type: str,
+                                           bbox: list, order: int,
+                                           column: str = None) -> bool:
+     '''checks the sequence order of the element in bbox of element_type,
+     counting from the top of the pdf or from the section title where it lies
+     Args:
+         top_or_title (str): either 'top' or 'title', from where to begin counting
+         element_type (str): first-hierarchy elements -- 'title', 'paragraph',
+             'list', 'figure' or 'table'
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+         order (int): the sequence order of the bounding box of element_type;
+             1, 2, 3, ... counts from the top, -1, -2, ... counts from the last
+         column (str): check the order in the left or right column; defaults
+             to None when the pdf is single column
+     '''
+
+ def check_subelement_order_from_top_or_title(top_or_title: str, subelement_type: str,
+                                              bbox: list, order: int,
+                                              column: str = None) -> bool:
+     '''checks the sequence order of the subelement in bbox of subelement_type,
+     counting from the top of the pdf or from the section title where it lies
+     Args:
+         top_or_title (str): either 'top' or 'title', from where to begin counting
+         subelement_type (str): second-hierarchy elements -- 'line' (title and
+             paragraph subelement), 'bullets' (list subelement), 'list_item'
+             (list subelement)
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+         order (int): the sequence order of the bounding box of subelement_type;
+             1, 2, 3, ... counts from the top, -1, -2, ... counts from the last
+         column (str): check the order in the left or right column; defaults
+             to None when the pdf is single column
+     '''
+
+ def check_subelement_order_in_element(subelement_type: str, bbox: list,
+                                       order: int) -> bool:
+     '''checks the sequence order of the subelement in bbox of subelement_type
+     within the element in which bbox lies
+     Args:
+         subelement_type (str): second-hierarchy elements -- 'line' (title and
+             paragraph subelement), 'bullets' (list subelement), 'list_item'
+             (list subelement)
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+         order (int): the sequence order of bbox of subelement_type;
+             1, 2, 3, ... counts from the top, -1, -2, ... counts from the last
+     '''
+
+ def check_subsubelement_order_from_top_or_title(top_or_title: str,
+                                                 subsubelement_type: str,
+                                                 bbox: list, order: int,
+                                                 column: str = None) -> bool:
+     '''checks the sequence order of the subsubelement in bbox of
+     subsubelement_type, counting from the top of the pdf or from the section
+     title where it lies
+     Args:
+         top_or_title (str): either 'top' or 'title', from where to begin counting
+         subsubelement_type (str): third-hierarchy elements -- 'bullet'
+             (subelement of bullets), 'line' (subelement of list_item)
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+         order (int): the sequence order of bbox of subsubelement_type;
+             1, 2, 3, ... counts from the top, -1, -2, ... counts from the last
+         column (str): check the order in the left or right column; defaults
+             to None when the pdf is single column
+     '''
+
+ def check_subsubelement_order_in_subelement(subsubelement_type: str, bbox: list,
+                                             order: int) -> bool:
+     '''checks the sequence order of the subsubelement in bbox of
+     subsubelement_type within the subelement in which bbox lies
+     Args:
+         subsubelement_type (str): third-hierarchy elements -- 'bullet'
+             (subelement of bullets), 'line' (subelement of list_item)
+         bbox (list): a bounding box of the type [x1, y1, x2, y2]
+         order (int): the sequence order of bbox of subsubelement_type;
+             1, 2, 3, ... counts from the top, -1, -2, ... counts from the last
+     '''
+ ```
351
+
352
+ We also include some sample Python code generated by Code-NLI for verification in Figure 8.
353
+
354
+ # D Dataset Statistics
355
+
356
+ The total size of the synthetically generated data is 3M samples. The topic range is the same as the topic distribution of the base datasets: scientific articles, patents, finance, tenders, laws, and manuals. Similarly, image resolution is the same as base datasets: $1025 \times 1025$ for DocLayNet and $800 \times 600$ for PubLayNet.
357
+
358
+ # E Model Architecture
359
+
360
+ Our model, DELOC, is based on the Phi-3-V model. The Phi-3-V architecture is essentially composed of two components - an image encoder (CLIP ViT-L/14 (Radford et al., 2021)) and a transformer decoder (phi-3-mini). The visual tokens from the image encoder are concatenated with text
361
+
362
+ User: Please locate the part that needs to be edited to process this query, 'Delete the last line of the last paragraph in the left column'.
363
+
364
+ System: The last paragraph in the left column is located at [84, 654, 484, 813] and the last line is at [84, 797, 140, 814].
365
+
366
+ ![](images/a7ca97b45a5e4edf62988dd8e494e6dcc3ce4a209cc0416d0e09faa1672a2d20.jpg)
367
+
368
+ is_element('list_item', [92,398,485,459])
369
+ in_section('A.1.Environment', [92,398,485,459])
370
+ check_subelement_order_in_element('list_item', [92,398,485,459], 2)
371
+
372
+ User: Please find the location that needs to be edited to process this query, 'Delete the reference "[23]" from the second paragraph in the left column'.
373
+
374
+ System: The second paragraph in the left column is located at [84, 410, 484, 498] and the reference "[23]" is at [84, 482, 167, 499].
375
+
376
+ ![](images/6937a613b7a1c3104cba624cf2eed197664f770b1ee7fcbcd80f1db17754f3be.jpg)
377
+
378
+ is_element('paragraph', [84, 410, 484, 498])
379
+ check_element_order_from_top_or_title('top', 'paragraph', [84, 410, 484, 498], 2, column='left')
380
+ contains_string('[23]', [84, 482, 167, 499])
381
+
382
+ Figure 8: Sample verification code generated by Code-NLI.
383
+ ![](images/aeac7283b8e342b7489f868ad18d6a6956cb6f467918627ef010a991abdef461.jpg)
384
+ The length of the table "Exhibit 2" is increased. The spread of the table is increased towards the left margin.
385
+
386
+ Figure 9: Additional Qualitative sample prediction from DELOC demonstrating its capabilities.
387
+ ![](images/a97ad259172bccbbc8ad15338ef927fa6b6241ab34fece8369cc737a9f929eed.jpg)
388
+ Moved the text "Golden Chopstick, Takeaway menu" above the purple line
389
+
390
+ ![](images/00d5d6cd15d5ff3ebc3611e090cd1b138088bd89b6b1835a0a05884bb0263566.jpg)
391
+ GT: Pred:
392
+
393
+ tokens in an interleaved manner and fed to the transformer decoder to output the prediction. In total, the model has 4.2B parameters.
394
+
395
+ Phi-3-mini is a transformer decoder with a hidden dimension of 3072, 32 attention heads, and 32 layers.
396
+
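+ The wiring described above can be summarized as follows; this is a sketch with illustrative names, not the actual Phi-3-V code.
+
+ ```python
+ # Phi-3-V style forward pass: CLIP visual tokens interleaved with text
+ # tokens, decoded by phi-3-mini (hidden 3072, 32 heads, 32 layers).
+ def phi3v_forward(image, text_tokens, clip_vit_l14, phi3_mini, interleave):
+     visual_tokens = clip_vit_l14(image)              # CLIP ViT-L/14 encoder
+     sequence = interleave(text_tokens, visual_tokens)
+     return phi3_mini(sequence)                       # autoregressive bbox prediction
+ ```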
397
+ # F Additional Training Details
398
+
399
+ The input image to our model is resized to $900 \times 900$ . The text-heavy nature of PDF images requires that the model handle image resolutions dynamically according to the resolution of the text: higher-resolution text should result in a higher input image resolution, and vice versa. This is taken care of by the dynamic cropping strategy (Dong et al., 2024) in the Phi-3-V model, which allows the input image to be dynamically cropped into a 2D array of blocks that are then concatenated to represent the whole image.
402
+
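+ A rough sketch of cropping a page into a grid of blocks is shown below; the grid size is illustrative, and the actual strategy follows Dong et al. (2024).
+
+ ```python
+ # Split a page image into a rows x cols grid of crops, which would then be
+ # encoded separately and concatenated.
+ from PIL import Image
+
+ def crop_into_blocks(image: Image.Image, rows: int, cols: int):
+     w, h = image.size
+     bw, bh = w // cols, h // rows
+     return [image.crop((c * bw, r * bh, (c + 1) * bw, (r + 1) * bh))
+             for r in range(rows) for c in range(cols)]
+ ```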
403
+ # G Additional Qualitative Examples
404
+
405
+ We add more qualitative samples to further illustrate the capabilities of DELOC. As can be seen in Figure 9, DELOC is able to ground user edit requests well both for PDFs and design documents.
2025/DELOC_ Document Element Localizer/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:144ddf497c15f876656ffd4eacc04874d84498807285cf8eae63e535bf23aa4a
3
+ size 562816
2025/DELOC_ Document Element Localizer/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/53d4ebbb-1b33-4bbd-9f27-62b7ab1bc458_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/53d4ebbb-1b33-4bbd-9f27-62b7ab1bc458_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/53d4ebbb-1b33-4bbd-9f27-62b7ab1bc458_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:abcd379784a07b3876796eb4d7430ce609ef98a1ca942a3c58e962c8a1b61724
3
+ size 1041735
2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/full.md ADDED
@@ -0,0 +1,1269 @@
1
+ # DICE: Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction
2
+
3
+ Yiqi Li $^{1}$ , Yusheng Liao $^{1,2}$ , Zhe Chen $^{1,2}$ , Yanfeng Wang $^{1,2}$ , Yu Wang $^{1,2*}$
4
+
5
+ $^{1}$ Shanghai Jiao Tong University,
6
+
7
+ $^{2}$ Shanghai Artificial Intelligence Laboratory
8
+
9
+ {17-adamant, liao20160907, chenzhe2018, wangyanfeng, yuwangsjtu}@sjtu.edu.cn
10
+
11
+ # Abstract
12
+
13
+ When performing reasoning tasks with user-specific requirements, such as strict output formats, large language models (LLMs) often prioritize reasoning over adherence to detailed instructions. Fine-tuning LLMs on supervised datasets to address this is impractical due to high computational costs and limited parameter access. To tackle this, we propose DICE, a lightweight framework that guides small language models (SLMs) to refine LLMs' outputs through chain-of-thought (CoT) correction. DICE decouples the process by first prompting LLMs to generate natural language responses, then using trained SLMs to analyze and refine these outputs to meet structured output specifications. This framework preserves LLMs' broad knowledge and reasoning capabilities while ensuring the outputs conform to user demands. Specifically, DICE first constructs structured CoT adaptation datasets via a two-stage method and subsequently applies a dual-tuning strategy to fine-tune SLMs for generating structured outputs in an analyze-then-answer pattern.1 Experiments demonstrate that DICE improves the average format accuracy and content correctness of LLM outputs by $35.4\%$ and $29.4\%$ , respectively, achieving state-of-the-art (SOTA) performance over other competitive baselines.
14
+
15
+ # 1 Introduction
16
+
17
+ Large language models (LLMs) have demonstrated significant advancements across diverse natural language processing (NLP) tasks, exhibiting exceptional capabilities in language comprehension and reasoning (Guo et al., 2025; Yang et al., 2024; Grattafori et al., 2024; Team et al., 2024; Chen et al., 2025b; Hurst et al., 2024). Their ability to follow general instructions is crucial in practical scenarios.
18
+
19
+ ![](images/bbdaec01b441cd5ab0433d700ca701476d25e52a09c8dc76ffb786277a63c77a.jpg)
20
+ Figure 1: Structured format accuracy and unstructured output accuracy across model sizes on MATH. The models are required to generate structured output given 2-shot prompts. The bars represent content accuracy of unstructured natural language outputs, and the lines denote the format accuracy of structured outputs. More details about formats are in Appendix B.
21
+
22
+ Such scenarios include complex decision-making, scientific research, and automated problem solving (Ouyang et al., 2022; Qin et al., 2024; Zhao et al., 2025). However, this instruction-following ability tends to degrade when LLMs are applied to challenging reasoning tasks (Tam et al., 2024; Shorten et al., 2024; Chen et al., 2025a), which restricts their broader application in reasoning-intensive tasks.
23
+
24
+ While scaling up LLMs can enhance their reasoning capacity (Kaplan et al., 2020; Zhong et al., 2021; Naveed et al., 2024), we observe a counterintuitive trade-off: larger models sometimes exhibit weaker adherence to user-specific instructions compared to smaller counterparts, even when their underlying reasoning is correct. Our preliminary experiments (Figure 1) provide empirical evidence of this trade-off, focusing on user instructions related to specific output formats. It is observed that format accuracy peaks at mid-scale models while declining in larger models, despite their superior reasoning performance.
25
+
26
+ ![](images/dcdfd9df89cfd00c792915dbac7ed51d5c4afa075ed64db6c46a731b9eec3b4a.jpg)
27
+ Figure 2: Overview of DICE framework. The training process comprises two sequential phases: DICE first employs a two-stage strategy to construct structured chain-of-thought data and subsequently implements a dual-tuning methodology to optimize the SLM to enforce rigorous format compliance. During inference, the trained SLM systematically analyzes and refines the natural language outputs from the LLM.
28
+
29
+ Specifically, large-scale models (e.g., 32B and 72B parameters) optimized for diverse tasks tend to allocate more attention to solving difficult problems, but often at the expense of strict adherence to output formatting instructions. The fragility of structured outputs exacerbates this issue: minor deviations (e.g., a misplaced bracket in JSON) can lead to complete parsing failures, disproportionately penalizing larger models during evaluation.
30
+
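+ The failure mode is easy to reproduce: in the sketch below, dropping a single bracket turns an otherwise correct answer into a parsing failure.
+
+ ```python
+ import json
+
+ good = '{"answer": 42, "steps": ["factor", "solve"]}'
+ bad = '{"answer": 42, "steps": ["factor", "solve"}'   # closing ']' dropped
+
+ json.loads(good)         # parses fine
+ try:
+     json.loads(bad)      # the whole response is scored as a format failure
+ except json.JSONDecodeError as e:
+     print(e)
+ ```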
31
+ A natural solution is to fine-tune LLMs on the supervised dataset, but it is associated with several critical challenges: (1) Inefficiency: fine-tuning LLMs typically requires prohibitive computational resources and extended training duration; (2) Alignment Tax: task-specific fine-tuning risks catastrophic forgetting, which can inadvertently lead to performance degradation (Ouyang et al., 2022; Bubeck et al., 2023; Jiang et al., 2024); (3) Impracticality: for many API-only LLMs such as GPT-4 (Achiam et al., 2023), fine-tuning is infeasible because model parameters are inaccessible. Recent studies have investigated collaborative frameworks that utilize small language models (SLMs) to effectively adapt LLMs to domain-specific tasks. Some methods use the probability distribution shift of the SLM during fine-tuning to calibrate the LLM outputs (Liu et al., 2024; Ormazabal et al., 2023), while others employ model collaboration techniques to facilitate multi-step reasoning generation and path search (Sun et al., 2024; Fan et al., 2025; Kim et al., 2025; Zheng et al., 2025).
32
+
33
+ Additionally, several studies fine-tune SLMs to learn the correctional residuals between the ground truth and the LLM-generated answers (Ji et al., 2024; Kim et al., 2024; Chen et al., 2024). However, these methods originally focus on enhancing the LLM's reasoning performance while overlooking its capability to follow instructions. Moreover, they fail to fully exploit the information embedded in the outputs of the LLM, resulting in a high mis-correction rate. Thus, even when applied to structured reasoning tasks, these methods fail to adequately balance formatted output and reasoning performance.
34
+
35
+ To address these limitations, we propose a framework that adapts LLMs to structured reasoning tasks by guiDing SLMs to thInk with Chain-of-thought corrEction (DICE), as illustrated in Figure 2. The framework operates in two stages: first, the LLM is prompted to produce unstructured natural language responses, avoiding interference from complex formatting requirements that could degrade reasoning quality. Then, a trained SLM is deployed to refine these outputs into specific formats. To train the SLM, we employ a two-stage process to generate rationales and construct structured chain-of-thought (CoT) adaptation datasets, followed by a dual-tuning strategy that guides the SLMs to perform deep analysis on LLM outputs before generating final answers. The core innovation of DICE lies in our novel use of the model collaboration framework to enhance the instruction-following capabilities of LLMs and the
36
+
37
+ analyze-then-answer pattern used in SLM generation. By utilizing chain-of-thought prompting to stimulate the reasoning ability of SLMs (Wei et al., 2023; Lyu et al., 2023; Srivastava et al., 2025), the SLMs are able to improve instruction-following ability without compromising the inherent reasoning performance of LLMs, thereby addressing the mis-correction issues observed in prior approaches.
38
+
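+ At inference time, the decoupled flow can be summarized as follows; this is a minimal sketch with illustrative function names, not the released implementation.
+
+ ```python
+ # DICE inference: the frozen LLM reasons in free-form natural language,
+ # and the trained SLM analyzes that output before emitting the final
+ # structured answer.
+ def dice_infer(question, format_spec, llm, slm):
+     free_form = llm(question)                        # unstructured reasoning
+     return slm(question, free_form, format_spec)     # analyze-then-answer output
+ ```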
39
+ We conduct extensive experiments on five reasoning benchmarks to validate DICE's effectiveness in adapting LLMs to downstream structured tasks. Compared to the LLM with a 2-shot prompt, DICE achieves significant improvements, demonstrating average gains of $35.4\%$ in format accuracy and $29.4\%$ in content accuracy. Moreover, DICE consistently outperforms other baselines across nearly all evaluated datasets. Our key contributions can be summarized as follows:
40
+
41
+ - To the best of our knowledge, we are the first to identify the negative correlation between instruction-following ability and model scale in reasoning tasks: while larger models exhibit stronger reasoning capabilities, their adherence to instructions tends to decline.
42
+ - We introduce DICE, a lightweight framework that leverages SLM to adapt LLMs to structured reasoning tasks. DICE operates without modifying the LLMs' parameters, thereby circumventing the "alignment tax" associated with fine-tuning and preserving the LLMs' general knowledge.
43
+ - Extensive experiments show that DICE outperforms other baselines in improving instruction-following capabilities in reasoning tasks without compromising reasoning performance. Furthermore, DICE demonstrates superior generalizability across datasets and models, making it applicable in a wider range of scenarios.
44
+
45
+ # 2 Related Work
46
+
47
+ # 2.1 Instruction-Following Ability of LLMs
48
+
49
+ The capability of LLMs to follow user instructions is critical for practical applications. In recent years, numerous studies on model instruction-following have emerged. IFEval (Zhou et al., 2023) and CIF-Bench (LI et al., 2024) contain various instructions for evaluating the general instruction-following proficiency.
50
+
51
+ FOFO (Xia et al., 2024) and StructuredRAG (Shorten et al., 2024) specifically target format compliance evaluation. For tasks requiring structured outputs, constrained decoding-based methods (Willard and Louf, 2023; Koo et al., 2024; Dong et al., 2025) have been proposed to force models to generate responses in specific formats. However, such methods suffer from poor flexibility and simultaneously degrade the quality of model-generated content (Tam et al., 2024). To solve these problems, we first propose reasoning tasks with specific output formats to simultaneously evaluate model instruction-following and reasoning abilities. Secondly, we introduce a model collaboration-based approach that enhances the model's instruction-following ability while improving its reasoning performance.
52
+
53
+ # 2.2 Collaboration of SLMs and LLMs
54
+
55
+ Due to prohibitively high computational costs and inaccessibility of model parameters, direct fine-tuning of LLMs remains infeasible for most researchers. This challenge has driven extensive exploration into LLM and SLM collaborative frameworks for task-specific adaptation. Distribution alignment approaches (Liu et al., 2024; Ormazabal et al., 2023) attempt to integrate the output distribution shifts during SLM fine-tuning with LLM output distributions, but their practical applicability is circumscribed by the inaccessibility of full vocabulary probability distributions for many LLMs. Routing-based mechanisms (Sun et al., 2024; Aggarwal et al., 2024; Fan et al., 2025) decompose tasks into multi-step reasoning processes, where an SLM selects optimal paths among multiple responses generated by the LLM or dynamically routes between the SLM and LLM during inference. However, these methods incur increased computational cost and latency due to repeated LLM invocations. SLM correction frameworks (Kim et al., 2024; Ji et al., 2024; Kim et al., 2025) train an SLM to learn the residual between the LLM output and the target answer, but existing methods suffer from high mis-correction rates due to insufficient analysis and utilization of the LLM output. When applied to structured reasoning tasks, these challenges persistently undermine the model's ability to maintain structured output fidelity while ensuring reasoning accuracy on complex questions. To address these limitations, this work constructs structured chain-of-thought adaptation benchmarks and leverages the analyze-then-answer pattern to enhance the reasoning and correction capability of SLMs.
56
+
57
+ # 3 Method
58
+
59
+ In this section, we delve into the technical details of adapting LLMs to structured reasoning tasks by guiDing SLMs to thInk with Chain-of-thought corrEction (DICE). In Section 3.1, we present the two-stage construction process of the structured chain-of-thought adaptation dataset. Next, in Section 3.2, we introduce the dual-tuning strategy used to fine-tune the SLM. The overview of DICE is presented in Figure 2 and Algorithm 1.
60
+
61
+ Algorithm 1: Algorithm of DICE
62
+ Input: Pretrained LLM $\mathcal{M}$ , SLM $\pi_{\theta}$ , original dataset $\mathcal{D} = \{(x^i,y^i)\}_{i = 1}^N$ , format token $y_{f}$ , learning rate for SFT $\eta_{S}$ , learning rate for GRPO $\eta_{G}$ , training iterations for SFT $T_{S}$ , training iterations for GRPO $T_{G}$
+ // Structured $\mathcal{Q}$ construction
63
+ 1 $\mathcal{Q}\gets \emptyset$ .
64
+ 2 for $x^{i},y^{i}\in \mathcal{D}$ do
65
+ 3 $y_o^i\sim \mathcal{M}(\cdot |x^i)$
66
+ 4 $r_1^i,y_{s,1}^i\sim \pi_0(\cdot |x^i,y_o^i)$
67
+ 5 if $y_{s,1}^{i} = y^{i}$ then
68
+ 6 $\mathcal{Q}\leftarrow \mathcal{Q}\cup \{(x^i,y_o^i,y_f,r_1^i,y^i)\}$
69
+ 7 else
70
+ 8 $r_2^i,y_{s,2}^i\sim \pi_0(\cdot |x^i,y_o^i,y^i)$
71
+ 9 if $y_{s,2}^{i} = y^{i}$ then
72
+ 10 $\mathcal{Q}\leftarrow \mathcal{Q}\cup \{(x^i,y_o^i,y_f,r_2^i,y^i)\}$
73
+ 11 end
74
+ 12 end
75
+ 13 end
+ // Fine-tuning the SLM
76
+ 14 for $t = 1$ to $T_S$ do
77
+ 15 $\mathcal{L}_{SFT}(\theta ,\mathcal{Q})\gets \mathbb{E}_{\mathcal{Q}}[\mathcal{L}_{SFT}(\theta)]$ (Eq. 5)
78
+ 16 $\theta \leftarrow \theta -\eta_S\nabla_\theta \mathcal{L}_{SFT}(\theta ,\mathcal{Q})$
79
+ 17 end
80
+ 18 for $t = 1$ to $T_G$ do
81
+ 19 $\mathcal{L}_{GRPO}(\theta ,\mathcal{Q})\gets \mathbb{E}_{\mathcal{Q}}[\mathcal{L}_{GRPO}(\theta)]$ (Eq. 9)
82
+ 20 $\theta \leftarrow \theta -\eta_G\nabla_\theta \mathcal{L}_{GRPO}(\theta ,\mathcal{Q})$
83
+ 21 end
+ Output: Fine-tuned SLM $\pi_{\theta}$
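+
+ Read as code, the fine-tuning phase of Algorithm 1 amounts to two loops of gradient steps, one per loss. Below is a minimal PyTorch-style sketch; `loader`, `sft_loss`, and `grpo_loss` are hypothetical stand-ins for the data pipeline and the losses of Eq. 5 and Eq. 9, and plain SGD stands in for the schematic update $\theta \leftarrow \theta -\eta \nabla_\theta \mathcal{L}$.
+
+ ```python
+ import torch
+
+ def dual_tune(model, loader, sft_loss, grpo_loss, eta_s, eta_g, T_s, T_g):
+     # Stage 1 (Alg. 1, lines 14-17): SFT steps on the loss of Eq. 5.
+     opt = torch.optim.SGD(model.parameters(), lr=eta_s)
+     for _ in range(T_s):
+         for batch in loader:
+             opt.zero_grad()
+             sft_loss(model, batch).backward()
+             opt.step()
+     # Stage 2 (Alg. 1, lines 18-21): GRPO steps on the loss of Eq. 9.
+     opt = torch.optim.SGD(model.parameters(), lr=eta_g)
+     for _ in range(T_g):
+         for batch in loader:
+             opt.zero_grad()
+             grpo_loss(model, batch).backward()
+             opt.step()
+     return model
+ ```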
84
+
85
+ # 3.1 Structured Chain-of-Thought Adaptation Dataset Construction
86
+
87
+ Given a pre-trained LLM $\mathcal{M}$ , pre-trained SLM $\pi_0$ , and question-answer pairs $\mathcal{D} = \{(x^i, y^i)\}_{i=1}^N$ from the original training set ( $N$ is the size of the benchmark), our approach begins by instructing the LLM $\mathcal{M}$ directly to obtain the natural language outputs:
88
+
89
+ $$
90
+ y_{o}^{i} \sim \mathcal{M}(\cdot \mid x^{i}) \tag{1}
91
+ $$
92
+
93
+ To address the problem of high mis-correction rates in prior approaches, we construct analytical data based on $y_{o}$ , which can guide the SLM to reason before generating the final answers.
94
+
95
+ To reduce computational cost and generate rationales more suitable for the SLM to learn, we propose a two-stage methodology, akin to STaR (Zelikman et al., 2022), to instruct the pre-trained SLM $\pi_0$ to reason over the LLM output and provide the predicted answer. Considering the limited instruction-following capability of $\pi_0$ , we first generate two demonstrations with the assistance of the LLM for the following two steps. In the first stage, the rationale $r_1^i$ and predicted answer $y_{s,1}^{i}$ are formulated as:
96
+
97
+ $$
98
+ \left(r_{1}^{i}, y_{s,1}^{i}\right) \sim \pi_{0}\left(\cdot \mid x^{i}, y_{o}^{i}\right) \tag{2}
99
+ $$
100
+
101
+ Based on the assumption that rationales leading to correct predicted answers possess positive utility, we filter the generated outputs, retaining only those associated with accurate answers $(y_{s,1}^{i} = y^{i})$ . In the subsequent stage, for the filtered-out samples, where $\pi_0$ alone struggles to generate meaningful rationales, we append the answer label $y^{i}$ as a contextual hint to the original input of these challenging samples, then regenerate rationales and answers:
102
+
103
+ $$
104
+ \left(r_{2}^{j}, y_{s,2}^{j}\right) \sim \pi_{0}\left(\cdot \mid x^{j}, y_{o}^{j}, y^{j}\right), \quad \text{for } y_{s,1}^{j} \neq y^{j} \tag{3}
105
+ $$
106
+
107
+ After generation, we repeat the aforementioned filtering procedure. Empirical results from our experiments demonstrate that, following this two-stage approach, the SLM successfully generates rationales yielding correct predictions for over $90\%$ of the original training split. Consequently, we discard the samples still filtered out after the second stage. The final rationale set can be formulated as $\mathcal{R} = \mathcal{R}_1 \cup \mathcal{R}_2 = \{r_1^i | y_{s,1}^i = y^i\}_{i=1}^{N_1} \cup \{r_2^j | y_{s,2}^j = y^j\}_{j=1}^{N_2}$ , where $N_1$ and $N_2$ denote the numbers of samples retained after each filtering procedure. Ultimately, we embed the rationales and answers into the required format (presented as $y_f$ ), obtaining the new training target $t^k = (y_f, r^k, y^k)$ . The final structured chain-of-thought adaptation dataset is $\mathcal{Q} = \{(x^k, y_o^k, t^k)\}_{k=1}^{N_1 + N_2}$ .
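+
+ The construction loop (lines 1-13 of Algorithm 1) is summarized in the sketch below; `llm_generate`, `slm_generate`, and `extract_answer` are hypothetical wrappers around the model calls and answer parsing, which we do not specify at the code level.
+
+ ```python
+ # Minimal sketch of the two-stage structured CoT dataset construction (Alg. 1, lines 1-13).
+ def build_adaptation_set(dataset, llm_generate, slm_generate, extract_answer, y_f):
+     Q = []
+     for x, y in dataset:
+         y_o = llm_generate(x)                  # Eq. 1: natural-language LLM output
+         r1 = slm_generate(x, y_o)              # Eq. 2: rationale + predicted answer
+         if extract_answer(r1) == y:            # stage-1 filter: keep rationales with correct answers
+             Q.append((x, y_o, (y_f, r1, y)))
+             continue
+         r2 = slm_generate(x, y_o, hint=y)      # Eq. 3: regenerate with the gold label as a hint
+         if extract_answer(r2) == y:            # stage-2 filter; remaining samples are discarded
+             Q.append((x, y_o, (y_f, r2, y)))
+     return Q
+ ```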
108
+
109
+ # 3.2 Guiding SLMs to Think Through a Dual-Tuning Strategy
110
+
111
+ The most straightforward fine-tuning approach is SFT, wherein $\pi_0$ is trained to predict all target tokens with equal emphasis. The training objective is to minimize the cross-entropy loss:
112
+
113
+ $$
114
+ \mathcal{L}_{SFT}(\theta) = -\log \pi_{\theta}(t \mid x, y_{o}) \tag{4}
115
+ $$
116
+
117
+ where $(x,y_{o},t)\sim \mathcal{Q}$ . The target $t$ consists of three components, allowing us to decompose the loss function and gradient into three corresponding terms:
118
+
119
120
+
121
+ $$
122
+ \mathcal{L}_{SFT}(\theta) = -\log \pi_{\theta}\left(y_{f}, r, y \mid x, y_{o}\right) = \mathcal{L}_{f}(\theta) + \mathcal{L}_{r}(\theta) + \mathcal{L}_{y}(\theta) \tag{5}
123
+ $$
124
+
125
+ $$
126
+ \nabla_{\theta}\mathcal{L}_{SFT}(\theta) = \nabla_{\theta}\mathcal{L}_{f}(\theta) + \nabla_{\theta}\mathcal{L}_{r}(\theta) + \nabla_{\theta}\mathcal{L}_{y}(\theta) \tag{6}
127
+ $$
128
+
129
+ In practice, these three components are intrinsically interwoven and vary in length across actual outputs, rendering separate computation infeasible. Moreover, the rationale $r$ typically constitutes the longest segment; for example, in the MATH dataset requiring XML output, the average token ratio between $y_{f}$ , $r$ , and $y$ is approximately $25:135:1$ . Consequently, during gradient computation, $|\nabla_{\theta}\mathcal{L}_r(\theta)|$ can significantly exceed $|\nabla_{\theta}\mathcal{L}_f(\theta)|$ and $|\nabla_{\theta}\mathcal{L}_y(\theta)|$ . This imbalance causes $\pi_0$ to prioritize minimizing $\mathcal{L}_r(\theta)$ during optimization, resulting in insufficient learning of $y_{f}$ and $y$ . Meanwhile, since $r$ is generated by $\pi_0$ itself, it primarily contains in-domain knowledge that provides limited additional information. In contrast, novel knowledge such as user instruction and task-specific information is primarily contained in $y_{f}$ and $y$ , which necessitate more focused learning.
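+
+ As a back-of-the-envelope illustration (assuming roughly equal per-token loss), the reported $25{:}135{:}1$ ratio means the rationale alone contributes about $84\%$ of the summed token losses:
+
+ ```python
+ tokens = {"y_f": 25, "r": 135, "y": 1}               # average token counts on MATH with XML output
+ total = sum(tokens.values())
+ shares = {k: round(v / total, 3) for k, v in tokens.items()}
+ print(shares)  # {'y_f': 0.155, 'r': 0.839, 'y': 0.006}
+ ```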
130
+
131
+ To address this challenge, we propose a dual-tuning strategy to progressively optimize the SLM. First, we conduct SFT utilizing Low-Rank Adaptation (LoRA (Hu et al., 2022)) on $\pi_0$ to rapidly acquire format specifications and the analyze-then-answer generation pattern, obtaining $\pi_{SFT}$ . Subsequently, we employ a more granular fine-tuning method to further optimize $\pi_{SFT}$ using the GRPO (Shao et al., 2024) algorithm (detailed in Appendix C). For model output $\hat{t} = (\hat{y}_f,\hat{r},\hat{y})$ , we design reward functions that assign rewards solely based on $\hat{y}_f$ and $\hat{y}$ , neglecting $\hat{r}$ . The total reward is calculated as follows:
132
+
133
+ $$
134
+ \text{reward} = \begin{cases} 2, & \text{both } \hat{y}_{f} \text{ and } \hat{y} \text{ are correct} \\ 1, & \text{exactly one of } \hat{y}_{f} \text{ and } \hat{y} \text{ is correct} \\ 0, & \text{both } \hat{y}_{f} \text{ and } \hat{y} \text{ are incorrect} \end{cases} \tag{7}
135
+ $$
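+
+ Since the reward simply counts the satisfied criteria, it reduces to a two-line function; in this sketch, `format_ok` and `answer_ok` are assumed to come from the same checkers used for F-Acc and C-Acc:
+
+ ```python
+ def grpo_reward(format_ok: bool, answer_ok: bool) -> int:
+     # Eq. 7: +1 for a correct format y_f, +1 for a correct final answer y;
+     # the rationale r receives no reward signal.
+     return int(format_ok) + int(answer_ok)
+ ```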
136
+
137
+ # 4 Experiments
138
+
139
+ # 4.1 Experimental Setup
140
+
141
+ Datasets and Metrics We evaluate the effectiveness of DICE from four dimensions: mathematical reasoning (GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021)), commonsense reasoning (CommonsenseQA (Talmor et al., 2018), CSQA for short), domain-specific reasoning (MedQA-zh (Jin et al., 2021)), and implicit reasoning (StrategyQA (Geva et al., 2021)).
142
+
143
+ To comprehensively evaluate the model's reasoning and instruction-following ability, we restructure the output into a more sophisticated XML format. We employ two metrics to evaluate the quality of the outputs: Format Accuracy (F-Acc) and Content Accuracy (C-Acc). Format accuracy assesses adherence to the structural elements and keywords prescribed by the template. Content accuracy is derived from the Exact Match (EM) score calculated for the final answer extracted from the output. Notably, format compliance is a necessary condition for content accuracy. More details are in Appendix A.
144
+
145
+ Baselines We compare DICE against three methodological categories: (1) Training-free methods: This category utilizes pre-trained large and small language models for response generation through 0-shot prompting and In-Context Learning (ICL). We also include a reflection baseline that feeds format-violating outputs from LLM ICL back to the LLM and instructs it to reflect and regenerate the answer. (2) Supervised Fine-Tuning (SFT) method: In this category, SLMs are directly fine-tuned on the supervised training data. (3) Model collaboration methods: Aligner (Ji et al., 2024), BBox-Adapter (Sun et al., 2024), and CoBB (Kim et al., 2024). Aligner leverages an SLM to learn the mapping between the LLM output and the ground-truth answer. BBox-Adapter scores iterative LLM generations via a trained evaluator, then applies beam search for optimal reasoning path selection. CoBB first constructs contrastive examples, then deploys an SLM to learn from the pair-wise preference data through the ORPO (Hong et al., 2024) algorithm. It should be noted that we apply all methods to the structured reasoning tasks, even though their original works primarily focus on content refinement.
146
+
147
+ Implementation Details We select Qwen2.5-72B-Instruct-GPTQ-Int4 (Yang et al., 2024) as the LLM. The initial SLMs are derived from the instruction-tuned models within the Qwen2.5 series. For ICL, models generate responses through a 2-shot prompt. For the BBox-Adapter, we utilize a single generation step and classification mode. For CoBB, we generate one positive and one negative reasoning for each question. For both SFT and Aligner, the SLMs are trained for 3 epochs. For our proposed DICE, we initially fine-tune the SLMs using SFT for 2 epochs, followed by 1 epoch of GRPO fine-tuning. Further details on the implementation can be found in Appendix D.
148
+
149
+ <table><tr><td rowspan="2">SLM</td><td rowspan="2">LLM</td><td rowspan="2">Method</td><td colspan="2">GSM8K</td><td colspan="2">MATH</td><td colspan="2">CSQA</td><td colspan="2">MedQA-zh</td><td colspan="2">StrategyQA</td><td colspan="2">Average</td></tr><tr><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td></tr><tr><td rowspan="3">X</td><td>72B</td><td>0-shot</td><td>0.0</td><td>0.0</td><td>0.2</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>72B</td><td>ICL</td><td>64.2</td><td>61.8</td><td>4.8</td><td>4.2</td><td>97.4</td><td>82.6</td><td>77.0</td><td>67.2</td><td>95.2</td><td>74.2</td><td>67.7</td><td>58.0</td></tr><tr><td>72B</td><td>Reflection</td><td>64.4</td><td>62.0</td><td>4.8</td><td>4.2</td><td>97.4</td><td>82.6</td><td>77.2</td><td>67.4</td><td>95.2</td><td>74.2</td><td>67.8</td><td>58.1</td></tr><tr><td rowspan="7">0.5B</td><td>X</td><td>0-shot</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>8.8</td><td>0.2</td><td>0.0</td><td>0.0</td><td>1.8</td><td>0.0</td></tr><tr><td>X</td><td>ICL</td><td>10.8</td><td>2.6</td><td>1.2</td><td>0.4</td><td>76.2</td><td>14.4</td><td>57.8</td><td>8.2</td><td>84.3</td><td>47.6</td><td>46.1</td><td>14.6</td></tr><tr><td>X</td><td>SFT</td><td>98.6</td><td>25.6</td><td>94.6</td><td>12.4</td><td>100.0</td><td>57.6</td><td>100.0</td><td>45.2</td><td>96.9</td><td>59.0</td><td>98.0</td><td>40.0</td></tr><tr><td>72B</td><td>Aligner</td><td>99.6</td><td>46.2</td><td>96.2</td><td>48.6</td><td>100.0</td><td>78.6</td><td>100.0</td><td>86.8</td><td>96.9</td><td>69.4</td><td>98.5</td><td>65.9</td></tr><tr><td>72B</td><td>BBox-Adapter</td><td>91.0</td><td>83.8</td><td>11.2</td><td>9.4</td><td>99.4</td><td>85.2</td><td>94.8</td><td>83.8</td><td>95.2</td><td>79.0</td><td>78.3</td><td>68.2</td></tr><tr><td>72B</td><td>CoBB</td><td>98.4</td><td>94.2</td><td>96.6</td><td>76.6</td><td>97.4</td><td>79.6</td><td>99.8</td><td>84.0</td><td>99.6</td><td>71.2</td><td>98.4</td><td>81.1</td></tr><tr><td>72B</td><td>DICE (Ours)</td><td>99.6</td><td>95.2</td><td>99.4</td><td>79.0</td><td>100.0</td><td>85.8</td><td>100.0</td><td>88.0</td><td>100.0</td><td>78.2</td><td>99.8</td><td>85.2</td></tr><tr><td 
rowspan="7">1.5B</td><td>X</td><td>0-shot</td><td>0.0</td><td>0.0</td><td>0.4</td><td>0.2</td><td>7.4</td><td>0.0</td><td>13.8</td><td>0.0</td><td>0.0</td><td>0.0</td><td>4.3</td><td>0.0</td></tr><tr><td>X</td><td>ICL</td><td>77.4</td><td>46.0</td><td>1.0</td><td>0.6</td><td>81.2</td><td>31.0</td><td>99.0</td><td>31.4</td><td>98.3</td><td>53.7</td><td>71.4</td><td>32.5</td></tr><tr><td>X</td><td>SFT</td><td>98.8</td><td>46.4</td><td>98.0</td><td>25.8</td><td>100.0</td><td>70.8</td><td>100.0</td><td>71.2</td><td>98.3</td><td>65.9</td><td>99.0</td><td>56.0</td></tr><tr><td>72B</td><td>Aligner</td><td>98.6</td><td>51.0</td><td>98.2</td><td>38.6</td><td>100.0</td><td>79.2</td><td>100.0</td><td>80.6</td><td>100.0</td><td>73.8</td><td>99.4</td><td>64.6</td></tr><tr><td>72B</td><td>BBox-Adapter</td><td>93.4</td><td>87.0</td><td>11.4</td><td>9.6</td><td>99.0</td><td>85.0</td><td>98.4</td><td>88.2</td><td>94.8</td><td>78.6</td><td>79.4</td><td>69.7</td></tr><tr><td>72B</td><td>CoBB</td><td>97.4</td><td>92.8</td><td>96.0</td><td>74.8</td><td>98.6</td><td>82.4</td><td>99.0</td><td>83.6</td><td>100.0</td><td>74.7</td><td>98.2</td><td>81.7</td></tr><tr><td>72B</td><td>DICE (Ours)</td><td>99.8</td><td>95.6</td><td>99.6</td><td>79.8</td><td>100.0</td><td>85.6</td><td>100.0</td><td>87.8</td><td>100.0</td><td>79.5</td><td>99.9</td><td>85.7</td></tr><tr><td rowspan="7">3B</td><td>X</td><td>0-shot</td><td>82.0</td><td>63.8</td><td>60.6</td><td>38.6</td><td>89.6</td><td>1.4</td><td>54.2</td><td>0.0</td><td>84.3</td><td>55.9</td><td>74.1</td><td>31.9</td></tr><tr><td>X</td><td>ICL</td><td>92.6</td><td>75.0</td><td>63.0</td><td>37.0</td><td>96.6</td><td>65.0</td><td>94.2</td><td>52.0</td><td>98.3</td><td>62.0</td><td>88.9</td><td>58.2</td></tr><tr><td>X</td><td>SFT</td><td>99.4</td><td>63.4</td><td>99.0</td><td>32.0</td><td>100.0</td><td>77.6</td><td>100.0</td><td>73.6</td><td>100.0</td><td>69.9</td><td>99.7</td><td>63.3</td></tr><tr><td>72B</td><td>Aligner</td><td>99.8</td><td>61.8</td><td>98.8</td><td>50.6</td><td>100.0</td><td>80.0</td><td>100.0</td><td>83.6</td><td>99.1</td><td>74.2</td><td>99.5</td><td>70.0</td></tr><tr><td>72B</td><td>BBox-Adapter</td><td>93.2</td><td>85.8</td><td>10.6</td><td>8.8</td><td>99.8</td><td>85.6</td><td>98.8</td><td>88.6</td><td>95.2</td><td>78.6</td><td>79.5</td><td>69.5</td></tr><tr><td>72B</td><td>CoBB</td><td>97.0</td><td>92.2</td><td>96.4</td><td>74.6</td><td>98.4</td><td>83.4</td><td>99.6</td><td>86.4</td><td>99.6</td><td>73.4</td><td>98.2</td><td>82.0</td></tr><tr><td>72B</td><td>DICE (Ours)</td><td>99.6</td><td>94.2</td><td>99.6</td><td>77.8</td><td>100.0</td><td>84.4</td><td>100.0</td><td>88.0</td><td>99.6</td><td>80.4</td><td>99.8</td><td>85.0</td></tr></table>
150
+
151
+ Table 1: Performance comparison on five downstream reasoning tasks with XML output requirement. All models originate from the Qwen2.5 model series. The base LLM is Qwen2.5-72B-Instruct-GPTQ-Int4. For each model size of SLM, the highest and second-highest scores are highlighted in bold and underlined, respectively.
152
+
153
+ <table><tr><td rowspan="2">Method</td><td colspan="2">MATH JSON</td><td colspan="2">MATH YAML</td></tr><tr><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td></tr><tr><td>LLM (ICL)</td><td>10.4</td><td>7.6</td><td>0.0</td><td>0.0</td></tr><tr><td>SLM (SFT)</td><td>98.0</td><td>23.8</td><td>98.8</td><td>29.0</td></tr><tr><td>LLM + Aligner</td><td>97.2</td><td>46.6</td><td>98.6</td><td>44.6</td></tr><tr><td>LLM + BBox-Adapter</td><td>18.4</td><td>12.4</td><td>3.2</td><td>2.2</td></tr><tr><td>LLM + CoBB</td><td>99.0</td><td>76.2</td><td>99.6</td><td>76.0</td></tr><tr><td>LLM + DICE(Ours)</td><td>99.8</td><td>80.0</td><td>100.0</td><td>79.6</td></tr></table>
154
+
155
+ Table 2: Performance comparison on MATH specifying JSON and YAML output formats. The model sizes of the LLM and SLM utilized are 72B and 1.5B, respectively.
156
+
157
158
+
159
+ # 4.2 Main Result
160
+
161
+ Table 1 presents the performance of DICE and other baselines on the five selected reasoning tasks with specialized XML output requirements. First, we observe that without any demonstrations, Qwen2.5-72B-Instruct-GPTQ-Int4 largely fails to generate responses adhering to the specific formatting requirements specified in user instructions. When employing ICL, it exhibits substantial instability in format accuracy across diverse datasets.
162
+
163
+ For instance, on more challenging benchmarks such as MATH, it tends to allocate excessive attention to problem-solving processes, consequently neglecting the formatting constraints outlined in instructions, resulting in substantially diminished format accuracy (below $5\%$ ). Furthermore, in the reflection baseline, even after feeding the incorrect responses along with feedback back into the LLM for regeneration, the format accuracy shows little improvement. This indicates that relying solely on LLMs cannot simultaneously balance output format and reasoning performance on these questions. However, after applying our proposed DICE framework with SLMs of at most 3 billion parameters, near-perfect format accuracy (approaching $100\%$ ) is achieved consistently across all evaluated datasets, while simultaneously improving the content accuracy by an average of $29.4\%$ compared to the LLM using ICL. Moreover, under identical SLM sizes, DICE achieves either the best or second-best performance across all datasets, and obtains the highest average scores in both F-Acc and C-Acc. It significantly outperforms the collaboration-based baselines, including Aligner, BBox-Adapter, and CoBB, with average content accuracy gains of $18.4\%$ , $16.1\%$ , and $3.6\%$ , respectively.
164
+
165
+ For additional structures such as JSON and YAML, we also conduct experiments with the 1.5B SLM on the MATH dataset, with results summarized in Table 2.
166
+
167
+ ![](images/8d50018b73ec551c006b23995369e7f2d069183acbc1301496e82234fb5b6861.jpg)
168
+ Figure 3: Cross-dataset generalization ability of different methods. The 1.5B SLMs trained on GSM8K and MATH through different methods are evaluated on test sets of both benchmarks. “A→B” represents models that are trained on A and tested on B.
169
+
170
+ It is observed that our proposed DICE consistently achieves the highest F-Acc and C-Acc for both JSON and YAML formats, outperforming other baseline approaches. These results indicate that our proposed DICE framework not only adheres to user-specified output format constraints but also effectively harnesses the respective reasoning capabilities of both large and small language models, leading to high content accuracy.
171
+
172
+ # 4.3 Generalizability Analysis
173
+
174
+ Cross-Model Generalizability Our DICE framework solely requires the outputs from the LLMs, making it applicable to diverse LLMs in a plug-and-play manner. To systematically evaluate the cross-model generalizability, we employ the 1.5B SLM (trained to adapt Qwen2.5-72B-Instruct-GPTQ-Int4 in Table 1) to adapt other distinct LLMs: Qwen2.5-7B-Instruct (Yang et al., 2024), Meta-Llama-3-8B-Instruct (Grattafiori et al., 2024), and GPT-4.1-mini (OpenAI, 2025) to tasks with XML format requirements. As presented in Table 3, the results demonstrate that DICE can successfully adapt various large models to all XML-constrained reasoning tasks, achieving an average F-Acc exceeding $99.5\%$ . Notably, DICE presents significant improvements in C-Acc compared to ICL baselines, with average gains of $16.3\%$ (Qwen2.5-7B-Instruct), $5.8\%$ (Meta-Llama-3-8B-Instruct), and $0.6\%$ (GPT-4.1-mini). Furthermore, compared to other baselines, DICE achieves superior performance in both F-Acc and C-Acc, underscoring its strong capability for cross-model generalization.
175
+
176
+ Cross-Dataset Generalizability To further investigate the generalization performance of our method across datasets, we conduct cross-dataset validation experiments using the 1.5B SLM from Table 1 on the GSM8K and MATH datasets.
177
+
178
+ Specifically, apart from the consistent train and test datasets, we assess both cross-dataset generalization scenarios: (1) evaluating MATH test performance of models trained on GSM8K, and (2) evaluating GSM8K test performance of models trained on MATH. The experimental results are visualized in Figure 3. It is observed that our DICE framework consistently achieves SOTA performance across all four evaluation dimensions compared to other baselines, which indicates that DICE not only enables the SLM to effectively acquire domain-specific knowledge but also maintains strong performance across diverse datasets. Notably, when applying models trained on GSM8K to the more challenging MATH test set, DICE demonstrates substantial improvement in C-Acc, exceeding all baselines by at least $35\%$ . This significant performance gap underscores DICE's capability to effectively leverage LLM outputs for generating more accurate responses, even when confronted with test data that exhibits greater complexity than the training samples. These findings collectively establish that the DICE framework attains exceptional cross-dataset generalizability relative to existing baseline methods.
179
+
180
+ # 4.4 Effectiveness Analysis
181
+
182
+ To further investigate the insights underlying the effectiveness of our proposed DICE framework, we conduct a comprehensive comparative analysis of the consistency between the natural language LLM outputs $(y_{o})$ and the structured outputs from the SLM $(\hat{t})$ across different generative model collaboration strategies (Aligner, CoBB, and DICE). We propose four metrics to evaluate the consistency: (1) Consistent Correct Rate (CCR): The proportion of samples where both $y_{o}$ and $\hat{t}$ are correct. (2) Correction Rate (ECR, error-to-correct): The proportion of samples where $y_{o}$ is incorrect but is corrected by the SLM. (3) Mis-correction Rate (CER, correct-to-error): The proportion of samples where $y_{o}$ is correct but becomes incorrect after adaptation. (4) Consistent Error Rate (EER): The proportion of samples where both $y_{o}$ and $\hat{t}$ are incorrect. The consistency performance of the generative methods is presented in Figure 4.
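+
+ The four rates partition the test set and sum to one; a minimal sketch of their computation from paired correctness flags (an illustrative helper, not our evaluation code):
+
+ ```python
+ def consistency_rates(llm_correct, slm_correct):
+     # llm_correct[i]: is the natural-language LLM output y_o correct?
+     # slm_correct[i]: is the structured SLM output t_hat correct?
+     n = len(llm_correct)
+     pairs = list(zip(llm_correct, slm_correct))
+     ccr = sum(a and b for a, b in pairs) / n          # both correct
+     ecr = sum(not a and b for a, b in pairs) / n      # error -> corrected
+     cer = sum(a and not b for a, b in pairs) / n      # correct -> mis-corrected
+     eer = sum(not a and not b for a, b in pairs) / n  # both incorrect
+     return ccr, ecr, cer, eer
+ ```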
183
+
184
+ The results demonstrate that DICE substantially outperforms both Aligner and CoBB by achieving significantly higher CCR and lower CER. Notably, DICE maintains a CER below $2\%$ across all datasets, indicating its strong ability to preserve the correctness of LLM outputs.
185
+
186
+ <table><tr><td rowspan="2">Method</td><td colspan="2">GSM8K</td><td colspan="2">MATH</td><td colspan="2">CSQA</td><td colspan="2">MedQA-zh</td><td colspan="2">StrategyQA</td><td colspan="2">Average</td></tr><tr><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td></tr><tr><td>Qwen2.5-7B-Instruct (ICL)</td><td>96.2</td><td>85.0</td><td>78.8</td><td>52.6</td><td>55.8</td><td>41.4</td><td>93.8</td><td>72.0</td><td>83.8</td><td>61.6</td><td>81.7</td><td>62.5</td></tr><tr><td>Qwen2.5-7B-Instruct + Aligner</td><td>99.4</td><td>54.8</td><td>97.2</td><td>40.8</td><td>100.0</td><td>77.2</td><td>100.0</td><td>76.6</td><td>99.6</td><td>66.7</td><td>99.2</td><td>63.2</td></tr><tr><td>Qwen2.5-7B-Instruct + BBox-Adapter</td><td>95.4</td><td>84.0</td><td>73.8</td><td>43.8</td><td>49.4</td><td>37.4</td><td>91.6</td><td>72.2</td><td>83.0</td><td>60.7</td><td>78.6</td><td>59.6</td></tr><tr><td>Qwen2.5-7B-Instruct + CoBB</td><td>98.0</td><td>91.0</td><td>95.4</td><td>66.6</td><td>99.2</td><td>78.0</td><td>98.4</td><td>73.0</td><td>97.8</td><td>69.0</td><td>97.8</td><td>75.5</td></tr><tr><td>Qwen2.5-7B-Instruct + DICE</td><td>99.8</td><td>92.2</td><td>99.4</td><td>71.0</td><td>100.0</td><td>78.6</td><td>100.0</td><td>81.4</td><td>99.6</td><td>70.7</td><td>99.8</td><td>78.8</td></tr><tr><td>Meta-Llama-3-8B-Instruct (ICL)</td><td>98.8</td><td>76.4</td><td>82.4</td><td>25.8</td><td>97.4</td><td>63.6</td><td>97.0</td><td>39.8</td><td>96.5</td><td>66.4</td><td>94.4</td><td>54.4</td></tr><tr><td>Meta-Llama-3-8B-Instruct + Aligner</td><td>99.6</td><td>52.0</td><td>98.0</td><td>27.2</td><td>100.0</td><td>68.4</td><td>100.0</td><td>65.0</td><td>100.0</td><td>64.2</td><td>99.5</td><td>55.4</td></tr><tr><td>Meta-Llama-3-8B-Instruct + BBox-Adapter</td><td>99.0</td><td>75.4</td><td>80.4</td><td>20.4</td><td>86.8</td><td>54.8</td><td>80.2</td><td>36.8</td><td>92.1</td><td>68.6</td><td>87.7</td><td>51.2</td></tr><tr><td>Meta-Llama-3-8B-Instruct + CoBB</td><td>98.6</td><td>80.6</td><td>91.2</td><td>31.4</td><td>96.4</td><td>64.4</td><td>98.2</td><td>46.0</td><td>98.3</td><td>63.8</td><td>96.5</td><td>57.2</td></tr><tr><td>Meta-Llama-3-8B-Instruct + DICE</td><td>100.0</td><td>81.6</td><td>99.6</td><td>33.2</td><td>100.0</td><td>68.4</td><td>100.0</td><td>48.6</td><td>97.8</td><td>69.0</td><td>99.5</td><td>60.2</td></tr><tr><td>GPT-4.1-mini (ICL)</td><td>100.0</td><td>96.2</td><td>78.6</td><td>67.4</td><td>100.0</td><td>82.8</td><td>97.8</td><td>76.2</td><td>100.0</td><td>83.0</td><td>95.3</td><td>81.1</td></tr><tr><td>GPT-4.1-mini + Aligner</td><td>99.8</td><td>55.2</td><td>96.2</td><td>35.2</td><td>100.0</td><td>81.4</td><td>100.0</td><td>77.0</td><td>98.3</td><td>75.1</td><td>98.9</td><td>64.8</td></tr><tr><td>GPT-4.1-mini + BBox-Adapter</td><td>100.0</td><td>92.0</td><td>81.4</td><td>65.6</td><td>98.2</td><td>81.0</td><td>96.6</td><td>74.0</td><td>99.6</td><td>82.1</td><td>95.2</td><td>78.9</td></tr><tr><td>GPT-4.1-mini + CoBB</td><td>96.2</td><td>90.4</td><td>81.8</td><td>59.0</td><td>97.6</td><td>80.4</td><td>98.6</td><td>72.8</td><td>93.7</td><td>73.8</td><td>93.6</td><td>75.3</td></tr><tr><td>GPT-4.1-mini + DICE</td><td>100.0</td><td>97.0</td><td>99.4</td><td>69.4</td><td>100.0</td><td>82.8</td><td>100.0</td><td>78.2</td><td>100.0</td><td>81.2</td><td>99.9</td><td>81.7</td></tr></table>
187
+
+ Table 3: Cross-model generalization ability of different methods. The 1.5B SLMs of all methods are trained to adapt Qwen2.5-72B-Instruct-GPTQ-Int4 to reasoning tasks with XML format requirements.
+
188
+ ![](images/515a150d3ce0126d6f05174bd0bb89f1abb1b1b1ff94d6b1b210db9ae180d8f3.jpg)
189
+ Figure 4: The consistency analysis between natural language outputs from LLM and outputs in XML format from 1.5B SLMs in generative approaches. We investigate the consistency in output correctness using four evaluation metrics: Mis-correction Rate (CER), Correction Rate (ECR), Consistent Error Rate (EER), and Consistent Correct Rate (CCR). These metrics provide a comprehensive insight into the strengths of the DICE framework.
190
+
191
+ This suggests that the analyze-then-answer paradigm adopted in DICE enables the SLM to more effectively utilize the information embedded in LLM outputs without introducing unnecessary modifications. Moreover, for questions beyond the LLM's knowledge coverage, DICE still demonstrates the capability to partially correct erroneous outputs. In contrast, Aligner and CoBB suffer from an "over-correction dilemma": though they can achieve relatively high ECR, revealing the ability to correct errors for questions outside the LLM's knowledge domain, they also exhibit high CER, which indicates that they fail to adequately analyze and assess the validity of LLM outputs, leading to erroneous modifications and an overall decline in performance.
192
+
193
+ # 4.5 Latency Analysis
194
+
195
+ In the inference stage, while our DICE framework and other model collaboration baselines enhance performance, they also introduce latency when compared to LLMs that directly employ ICL. To quantitatively assess this overhead, we compare the inference time of the LLM with ICL against various model collaboration approaches.
196
+
197
198
+
199
+ <table><tr><td>Method</td><td>Time(s/sample)</td><td>C-Acc(%)</td></tr><tr><td>LLM (ICL)</td><td>0.9191</td><td>4.2</td></tr><tr><td>LLM + Aligner</td><td>1.1090</td><td>38.6</td></tr><tr><td>LLM + BBox-Adapter</td><td>2.2592</td><td>9.6</td></tr><tr><td>LLM + CoBB</td><td>1.0854</td><td>74.8</td></tr><tr><td>LLM + DICE(Ours)</td><td>1.0999</td><td>79.8</td></tr></table>
200
+
201
+ Table 4: Inference time and performance of different methods. The reported time represents the average inference latency per sample (in seconds) across different methods.
202
+
203
+ The evaluation experiment is conducted over the same 100 samples from the MATH test set on two NVIDIA A100 GPUs. The model collaboration methods (Aligner, BBox-Adapter, CoBB, and DICE) use the Qwen2.5-72B-Instruct-GPTQ-Int4 LLM with the fine-tuned Qwen2.5-1.5B-Instruct SLM. To mitigate random variance, each method is evaluated across five independent inference runs, and the average latency is reported as the final result in Table 4.
204
+
205
206
+
207
+ <table><tr><td rowspan="2">Strategy</td><td colspan="2">MATH</td><td colspan="2">CSQA</td></tr><tr><td>F-Acc</td><td>C-Acc</td><td>F-Acc</td><td>C-Acc</td></tr><tr><td>SFT</td><td>98.4</td><td>75.6</td><td>100.0</td><td>82.8</td></tr><tr><td>GRPO</td><td>92.0</td><td>73.2</td><td>89.2</td><td>77.2</td></tr><tr><td>SFT+GRPO</td><td>99.8</td><td>78.8</td><td>100.0</td><td>85.6</td></tr></table>
208
+
209
+ Table 5: Ablation experiments on fine-tuning strategy used in DICE framework. For all experiments, the 1.5B SLMs are fine-tuned for three epochs.
210
+
211
+ The experimental results indicate that our DICE framework introduces an additional latency of approximately $20\%$ compared to the LLM with ICL. However, this modest time overhead yields substantial performance gains: content accuracy improves by over $70\%$ on the MATH test set and by an average of more than $25\%$ across the five reasoning tasks selected in this work. The significant performance gains achieved with minimal time overhead make this trade-off entirely worthwhile. Furthermore, compared to other model collaboration methods, our DICE framework achieves optimal performance at a comparable computational cost, which demonstrates the high efficiency of our approach.
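+
+ The $\sim 20\%$ figure follows directly from the per-sample latencies in Table 4:
+
+ ```python
+ base, dice = 0.9191, 1.0999  # s/sample for LLM (ICL) and LLM + DICE, from Table 4
+ print(f"extra latency: {dice / base - 1:.1%}")  # -> 19.7%, i.e. roughly 20%
+ ```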
212
+
213
+ # 4.6 Ablation Study
214
+
215
+ Fine-tuning Strategy Ablation To validate the efficacy of our DICE framework, we conduct ablation studies by replacing the fine-tuning procedure with either SFT or the GRPO algorithm exclusively. We train the 1.5B SLMs for 3 epochs on the structured chain-of-thought adaptation datasets built from MATH and CSQA (more training details can be found in Appendix D.2). The experimental results are presented in Table 5. It is observed that with the same number of fine-tuning epochs, the GRPO-only strategy exhibits suboptimal performance in both format adherence and content accuracy, which suggests that the GRPO algorithm is inefficient for learning format information from scratch. In contrast, the SFT-only strategy enables the model to effectively learn the user-specified output format and achieve competitive C-Acc compared to other baselines in Table 1, which reveals the effectiveness of the analyze-then-answer generation pattern of the data construction in Section 3.1. Moreover, as discussed in Section 3.2, our dual-tuning strategy that applies GRPO after SFT enables the model to better attend to the output format and final answer, achieving the highest F-Acc and C-Acc scores.
216
+
217
+ Correctness Ratio of $y_{o}$ Ablation To quantify the impact of the correctness ratio of $y_{o}$ in the training data and to determine the optimal ratio, we conduct a controlled experiment on the MATH dataset. In this experiment, we maintain a constant training set size of 5,000 samples while varying the proportion of correct $y_{o}$ across four configurations: $100\%$ , $75\%$ , $50\%$ , and $25\%$ . All $y_{o}$ outputs are generated by Qwen2.5-72B-Instruct-GPTQ-Int4 to ensure consistency. Subsequently, we train four distinct 1.5B SLMs using these training sets and evaluate their performance on the test set by pairing them with various LLMs. The results of this experiment are presented in Appendix E. The experimental results indicate that the higher the proportion of correct $y_{o}$ in the training set, the more the model tends to inherit the answers from the LLM, thereby reducing mis-corrections and enabling better collaboration with stronger LLMs. In contrast, SLMs trained on datasets with lower $y_{o}$ correctness ratios (e.g., $50\%$ ) exhibit stronger correction capabilities, making them more effective when paired with weaker LLMs. However, this trend does not extend linearly to extreme cases. For instance, training with only $25\%$ correct $y_{o}$ results in suboptimal performance across all LLM backbones. This is likely because the SLM's inherent capacity limitations prevent it from achieving high correction accuracy when exposed to predominantly incorrect examples. Overall, the model trained with $50\%$ correct $y_{o}$ not only achieves the best performance but also exhibits greater stability and robustness.
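+
+ Constructing such a controlled mixture is straightforward; a sketch assuming pre-sorted pools of samples with correct and incorrect $y_{o}$ (hypothetical variable names):
+
+ ```python
+ import random
+
+ def mix_by_correctness(correct_pool, incorrect_pool, n=5000, ratio=0.5, seed=0):
+     # Build a training set whose fraction of correct y_o equals `ratio` (Sec. 4.6 ablation).
+     rng = random.Random(seed)
+     k = int(n * ratio)
+     return rng.sample(correct_pool, k) + rng.sample(incorrect_pool, n - k)
+ ```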
218
+
219
+ # 5 Conclusion
220
+
221
+ In this paper, we propose the DICE framework, a highly efficient and plug-and-play approach that adapts LLMs to structured reasoning tasks. We construct structured chain-of-thought adaptation datasets that guide the SLM to reason before generating final answers. We also design a dual-tuning strategy that leverages the strengths of both the SFT and GRPO algorithms. Experimental results demonstrate that DICE achieves near-perfect format adherence while maintaining superior content accuracy, coupled with exceptional generalization capabilities. It addresses LLMs' instruction-following limitations without directly fine-tuning them, achieving a dual enhancement in both reasoning and instruction-following. This innovation holds significant practical value in real-world applications with specific user requirements.
222
+
223
+ # Limitations
224
+
225
+ Although our proposed DICE framework can effectively balance the trade-off between LLMs' instruction-following and reasoning capabilities, demonstrating superior performance in both structure adherence and content accuracy, it nonetheless has some limitations. For example, our approach introduces additional computational overhead. For each query, it first invokes the LLM to generate natural language outputs, which are then refined by the SLM. However, for relatively simple questions that can be adequately addressed by the SLM alone, invoking the LLM is unnecessary and increases both computational cost and latency. Future work could incorporate a mechanism to assess question complexity beforehand, selectively engaging the LLM only when necessary, thereby optimizing resource usage.
226
+
227
+ # Ethical Considerations
228
+
229
+ All models utilized in this study, except for GPT-4.1-mini, and all datasets are open-source. We downloaded the open-source models from their official releases on Hugging Face and accessed GPT models via the OpenAI API. Throughout, we strictly comply with all applicable user licenses. The datasets utilized in this research are sourced from the officially published repositories and are used exclusively for academic research purposes. All datasets are widely used and contain no personal or sensitive information. Therefore, there is no risk of personal information leakage here.
230
+
231
+ For AI usage, we only use AI assistants to check typos and grammar errors when writing.
232
+
233
+ # Acknowledgements
234
+
235
+ This work was supported by the National Key R&D Program of China (No. 2022ZD0162101).
236
+
237
+ # References
238
+
239
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and 1 others. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
240
+ Pranjal Aggarwal, Aman Madaan, Ankit Anand, Srividya Pranavi Potharaju, Swaroop Mishra, Pei Zhou, Aditya Gupta, Dheeraj Rajagopal, Karthik Kappaganthu, Yiming Yang, and 1 others. 2024. Automix: Automatically mixing language models. Advances in Neural Information Processing Systems, 37:131000-131034.
241
+
242
243
+ Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. 2023. A general theoretical paradigm to understand learning from human preferences. Preprint, arXiv:2310.12036.
244
+ Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, and 1 others. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.
245
+ Dong Chen, Shuo Zhang, Yueting Zhuang, Siliang Tang, Qidong Liu, Hua Wang, and Mingliang Xu. 2024. Improving large models with small models: Lower costs and better performance. Preprint, arXiv:2406.15471.
246
+ Hailin Chen, Fangkai Jiao, Mathieu Ravaut, Nawshad Farruque, Xuan Phi Nguyen, Chengwei Qin, Manan Dey, Bosheng Ding, Caiming Xiong, Shafiq Joty, and Yingbo Zhou. 2025a. StructTest: Benchmarking LLMs' Reasoning through Compositional Structured Outputs. Preprint, arXiv:2412.18011.
247
+ Zhe Chen, Yusheng Liao, Shuyang Jiang, Pingjie Wang, Yiqiu Guo, Yanfeng Wang, and Yu Wang. 2025b. Towards omni-rag: Comprehensive retrieval-augmented generation for large language models in medical applications. arXiv preprint arXiv:2501.02460.
248
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and 1 others. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
249
+ Yixin Dong, Charlie F. Ruan, Yaxing Cai, Ruihang Lai, Ziyi Xu, Yilong Zhao, and Tianqi Chen. 2025. Xgrammar: Flexible and efficient structured generation engine for large language models. Preprint, arXiv:2411.15100.
250
+ Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model alignment as prospect theoretic optimization. Preprint, arXiv:2402.01306.
251
+ Yijiang Fan, Yuren Mao, Longbin Lai, Ying Zhang, Zhengping Qian, and Yunjun Gao. 2025. G-boost: Boosting private slms with general llms. arXiv preprint arXiv:2503.10367.
252
+ Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346-361.
253
+
254
+ Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
255
+ Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and 1 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948.
256
+ Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874.
257
+ Jiwoo Hong, Noah Lee, and James Thorne. 2024. Reference-free monolithic preference optimization with odds ratio. arXiv preprint arXiv:2403.07691.
258
+ Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, and 1 others. 2022. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations.
259
+ Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276.
260
+ Jiaming Ji, Boyuan Chen, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Tianyi Alex Qiu, Juntao Dai, and Yaodong Yang. 2024. Aligner: Efficient alignment by learning to correct. Advances in Neural Information Processing Systems, 37:90853-90890.
261
+ Shuyang Jiang, Yusheng Liao, Ya Zhang, Yanfeng Wang, and Yu Wang. 2024. Taia: Large language models are out-of-distribution data learners. Preprint, arXiv:2405.20192.
262
+ Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421.
263
+ Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, arXiv:2001.08361.
264
+ Jaehyung Kim, Dongyoung Kim, and Yiming Yang. 2024. Learning to correct for qa reasoning with black-box llms. arXiv preprint arXiv:2406.18695.
265
+ Yujin Kim, Euiin Yi, Minu Kim, Se-Young Yun, and Taehyeon Kim. 2025. Guiding reasoning in small language models with llm assistance. arXiv preprint arXiv:2504.09923.
266
+
267
+ Terry Koo, Frederick Liu, and Luheng He. 2024. Automata-based constraints for language model decoding. Preprint, arXiv:2407.08103.
268
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. Preprint, arXiv:2309.06180.
269
+ Yizhi LI, Ge Zhang, Xingwei Qu, Jiali Li, Zhaoqun Li, Zekun Wang, Hao Li, Ruibin Yuan, Yinghao Ma, Kai Zhang, Wangchunshu Zhou, Yiming Liang, Lei Zhang, Lei Ma, Jiajun Zhang, Zuowen Li, Stephen W. Huang, Chenghua Lin, and Jie Fu. 2024. CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models. Preprint, arXiv:2402.13109.
270
+ Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, and Noah A Smith. 2024. Tuning language models by proxy. arXiv preprint arXiv:2401.08565.
271
+ Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. 2023. Faithful chain-of-thought reasoning. Preprint, arXiv:2301.13379.
272
+ Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Naveed Akhtar, Nick Barnes, and Ajmal Mian. 2024. A comprehensive overview of large language models. Preprint, arXiv:2307.06435.
273
+ OpenAI. 2025. Gpt-4.1 mini documentation. https://platform.openai.com/docs/models/gpt-4.1-mini.
274
+ Aitor Ormazabal, Mikel Artetxe, and Eneko Agirre. 2023. Comblm: Adapting black-box language models through small fine-tuned models. arXiv preprint arXiv:2305.16876.
275
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, and 1 others. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.
276
+ Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, Pengfei Liu, and Dong Yu. 2024. Infobench: Evaluating instruction following ability in large language models. arXiv preprint arXiv:2401.03601.
277
+ Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Preprint, arXiv:2305.18290.
278
+
279
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. Preprint, arXiv:1707.06347.
280
+ Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, and 1 others. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300.
281
+ Connor Shorten, Charles Pierse, Thomas Benjamin Smith, Erika Cardenas, Akanksha Sharma, John Trengrove, and Bob van Luijt. 2024. Structuredrag: Json response formatting with large language models. arXiv preprint arXiv:2408.11061.
282
+ Gaurav Srivastava, Shuxiang Cao, and Xuan Wang. 2025. Towards reasoning ability of small language models. Preprint, arXiv:2502.11569.
283
+ Haotian Sun, Yuchen Zhuang, Wei Wei, Chao Zhang, and Bo Dai. 2024. Bbox-adapter: Lightweight adapting for black-box large language models. arXiv preprint arXiv:2402.08219.
284
+ Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.
285
+ Zhi Rui Tam, Cheng-Kuang Wu, Yi-Lin Tsai, Chieh-Yen Lin, Hung-yi Lee, and Yun-Nung Chen. 2024. Let me speak freely? a study on the impact of format restrictions on performance of large language models. arXiv preprint arXiv:2408.02442.
286
+ Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, and 1 others. 2024. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295.
287
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Preprint, arXiv:2201.11903.
288
+ Brandon T. Willard and Rémi Louf. 2023. Efficient guided generation for large language models. Preprint, arXiv:2307.09702.
289
+ Congying Xia, Chen Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, and Caiming Xiong. 2024. Fofo: A benchmark to evaluate llms' format-following capability. arXiv preprint arXiv:2402.18667.
290
+ An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, and 1 others. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.
291
+
292
+ Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. 2022. Star: Bootstrapping reasoning with reasoning. Preprint, arXiv:2203.14465.
293
+ Hao Zhao, Maksym Andriushchenko, Francesco Croce, and Nicolas Flammarion. 2025. Is in-context learning sufficient for instruction following in llms? Preprint, arXiv:2405.19874.
294
+ Yuze Zhao, Jintao Huang, Jinghan Hu, Xingjun Wang, Yunlin Mao, Daoze Zhang, Zeyinzi Jiang, Zhikai Wu, Baole Ai, Ang Wang, Wenmeng Zhou, and Yingda Chen. 2024. Swift: A scalable lightweight infrastructure for fine-tuning. Preprint, arXiv:2408.05517.
295
+ Wenhao Zheng, Yixiao Chen, Weitong Zhang, Souvik Kundu, Yun Li, Zhengzhong Liu, Eric P. Xing, Hongyi Wang, and Huaxiu Yao. 2025. Citer: Collaborative inference for efficient large language model decoding with token-level routing. Preprint, arXiv:2502.01976.
296
+ Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of $100+$ language models. Preprint, arXiv:2403.13372.
297
+ Ruiqi Zhong, Dhruba Ghosh, Dan Klein, and Jacob Steinhardt. 2021. Are larger pretrained language models uniformly better? comparing performance at the instance level. Preprint, arXiv:2105.06020.
298
+ Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. 2023. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911.
299
+
300
+ # A Datasets and Metrics
301
+
302
+ This section presents further details about the datasets and the two evaluation metrics used in the work.
303
+
304
+ # A.1 Datasets
305
+
306
+ In the experiments of this work, we select five commonly used datasets from four types of reasoning tasks: mathematical reasoning, commonsense reasoning, domain-specific reasoning, and implicit reasoning.
307
+
308
+ - GSM8K (Cobbe et al., 2021), short for "Grade School Math 8K", is a benchmark consisting of 8,500 question-answering math problems designed to evaluate the fundamental mathematical reasoning abilities of models. In this dataset, each problem is paired with step-by-step reasoning and the numerical answer.
309
+
310
+ - MATH (Hendrycks et al., 2021) is a more challenging question-answering dataset consisting of 12,500 algebra, calculus, geometry, and precalculus problems sourced from high school mathematics competitions. Compared to GSM8K, MATH requires deeper mathematical knowledge and more sophisticated multi-step problem-solving capabilities.
311
+ - CommonsenseQA (Talmor et al., 2018) is a multiple-choice question-answering benchmark designed to evaluate a model's commonsense reasoning ability. It contains 12.1k questions sourced from ConceptNet, requiring models to leverage general world knowledge and contextual understanding to select the correct answers from the given five options.
312
+ - MedQA (Jin et al., 2021) is a domain-specific dataset containing 61,097 multiple-choice questions from the United States Medical Licensing Examination (English), the Chinese National Medical Licensing Exam (simplified Chinese), and Taiwan's Medical Licensing Exam (traditional Chinese). In this paper, we only sample data from the simplified Chinese subset to enrich the language diversity of our experiment, so we denote the dataset as MedQA-zh.
313
+ - StrategyQA (Geva et al., 2021) is a binary benchmark for evaluating a model's strategic reasoning and implicit task understanding capabilities. It includes 2,290 True/False questions that require models to decompose complex problems into executable reasoning steps.
314
+
315
+ Notably, to reduce computational costs, we randomly sample 1,000 training examples and 500 test examples from each dataset as our original training and evaluation sets (the test set of StrategyQA contains only 229 examples). When constructing the new structured chain-of-thought dataset illustrated in Section 3.1, we prompt the LLM to sample five responses for each training sample. This allows us to expand the size of the training set to approximately 5,000 examples without increasing the number of LLM invocations, although some samples are filtered out after the two-stage rationale generation process.
316
+
317
+ # A.2 Metrics
318
+
319
+ We design two metrics, format accuracy and content accuracy, to respectively evaluate the model's format adherence and reasoning capability:
320
+
321
+ # XML
322
+
323
+ ```txt
+ xml format(\n)
+ <?xml version="1.0" encoding="UTF8"?>(\n)
+ <Answer>(\n)
+ <Step by step reasoning>{"step1":"{reasoning1 placeholder}", "step2":"{reasoning2 placeholder}",...}</Step by step reasoning>(\n)
+ <Final answer>{answer placeholder}</Final answer>(\n)
+ </Answer>(\n)
+ ```
332
+
333
+ # JSON
334
+
335
354
+ # YAML
355
+
356
399
+
400
+ Figure 5: Format templates for GSM8K, MATH, and StrategyQA datasets. The symbol $(\backslash n)$ indicates the presence of a newline character at that specific position.
401
+
402
403
+
404
+ - Format accuracy (F-Acc): The proportion of all outputs whose format is correct. We employ string matching to parse the model outputs. An output is classified as having correct formatting only if it contains all specified keywords in the required format or can be automatically converted into the specified structure using standard toolkits.
405
+ - Content accuracy (C-Acc): Content accuracy is measured by calculating the Exact Match (EM) score between the final answer extracted from the model's response and the ground truth. Notably, final answers can only be extracted when the output format is correct; for format-incorrect samples, the final answers remain unextractable and are therefore judged as incorrect. Consequently, any output with correct content must necessarily have a correct format (a minimal evaluation sketch follows this list).
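+
+ The sketch below illustrates how the two metrics can be computed for the XML template of Figure 5; the keyword list and the regex are our illustrative choices, not the exact matcher used in our evaluation.
+
+ ```python
+ import re
+
+ FINAL_RE = re.compile(r"<Final answer>(.*?)</Final answer>", re.DOTALL)
+ REQUIRED = ["<Answer>", "<Step by step reasoning>", "</Step by step reasoning>",
+             "<Final answer>", "</Final answer>", "</Answer>"]
+
+ def f_acc_c_acc(outputs, golds):
+     f_hits = c_hits = 0
+     for out, gold in zip(outputs, golds):
+         format_ok = all(k in out for k in REQUIRED)  # F-Acc: all required keywords present
+         f_hits += format_ok
+         if format_ok:                                # answers are only extractable from well-formed outputs
+             m = FINAL_RE.search(out)
+             c_hits += bool(m and m.group(1).strip() == gold.strip())  # C-Acc: exact match
+     return f_hits / len(golds), c_hits / len(golds)
+ ```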
406
+
407
+ # B Format Details
408
+
409
+ In our experiment, we employed three formats: XML, JSON, and YAML.
410
+
411
+ # XML
+
+ ```txt
+ xml format(\n)
+ <?xml version="1.0" encoding="UTF8"?>(\n)
+ <Answer>(\n)
+ <Step by step reasoning>{"step1":"{reasoning1 placeholder}", "step2":"{reasoning2 placeholder}",...}</Step by step reasoning>(\n)
+ <Answer label>{answer label placeholder}</Answer label>(\n)
+ <Result>{result placeholder}</Result>(\n)
+ </Answer>(\n)
+ ```
+
+ # JSON
+
+ ```txt
+ json format(\n)
+ {"step by step reasoning":{"step1":"{reasoning1 placeholder}","step2":"{reasoning2 placeholder}",...},"answer label":"{answer label placeholder}","result":"{result placeholder}"}(\n)
+ ```
+
+ # YAML
+
+ ```txt
+ yaml format(\n)
+ step_by_step_reasoning:(\n)
+ - step1: "{reasoning1 placeholder}"(\n)
+ - step2: "{reasoning2 placeholder}"(\n)
+ ...(\n)
+ answer_label:(\n)
+ - label: "{answer label placeholder}"(\n)
+ result:(\n)
+ - answer: "{answer placeholder}"(\n)
+ ```
426
+
427
+ Figure 6: Format templates for CommonsenseQA and MedQA datasets. The symbol $(\backslash n)$ indicates the presence of a newline character at that specific position.
428
+
429
+ For the three question-answering datasets, GSM8K, MATH, and StrategyQA, the formatted output incorporates step-by-step reasoning along with the final answer. For the multiple-choice datasets, CommonsenseQA and MedQA, the model is additionally required to generate the selected option label along with its corresponding answer, in addition to the reasoning steps. The basic templates of the three formats are shown in Figure 5 (GSM8K, MATH, and StrategyQA) and Figure 6 (CommonsenseQA and MedQA). As illustrated in Figure 7, we provide representative examples from MATH, CommonsenseQA, and StrategyQA.
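+
+ For concreteness, a small sketch of how a structured JSON target might be assembled from reasoning steps and a final answer, following the style of the Figure 6 template (minus the multiple-choice answer label); this is illustrative only, not the packaging code used in our pipeline:
+
+ ```python
+ import json
+
+ def to_json_target(steps, answer):
+     # Numbered reasoning steps plus the final result, as in the JSON templates.
+     reasoning = {f"step{i}": s for i, s in enumerate(steps, start=1)}
+     return json.dumps({"step by step reasoning": reasoning, "result": answer},
+                       ensure_ascii=False)
+
+ print(to_json_target(["48 / 2 = 24", "24 + 48 = 72"], "72"))
+ ```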
430
+
431
+ # C Algorithm Details of DICE
432
+
433
+ In this section, we elaborate on the core details of the GRPO algorithm and present the complete DICE algorithm in Algorithm 1.
434
+
435
+ # C.1 GRPO Algorithm Details
436
+
437
+ In the fine-tuning process of LLMs, reinforcement learning (RL) plays a pivotal role (Schulman et al., 2017; Rafailov et al., 2024; Azar et al., 2023; Ethayarajh et al., 2024). Although the traditional Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm has been widely adopted for LLM fine-tuning, it requires maintaining a separate value network comparable in size to the policy model for advantage function estimation, leading to substantial memory consumption and computational overhead in large-scale scenarios. To address these challenges, the Group Relative Policy Optimization (GRPO) (Shao et al., 2024) algorithm is proposed, which seeks to minimize dependence on value networks while preserving the stability and efficiency of policy updates.
440
+
441
+ The GRPO framework operates by sampling a group of actions from the current policy and calculating relative advantages within this group, thereby eliminating the need for a critic model. The advantage estimation can be formulated as:
442
+
443
+ $$
444
+ A_{i} = \frac{r_{i} - \operatorname{mean}(\{r_{1}, r_{2}, \cdots, r_{G}\})}{\operatorname{std}(\{r_{1}, r_{2}, \cdots, r_{G}\})} \tag{8}
445
+ $$
446
+
447
+ where $G$ denotes the group size (number of sampled actions per iteration). The complete loss function of GRPO can be expressed as:
448
+
449
+ $$
450
+ \mathcal{L}_{\mathrm{GRPO}}(\theta) = \frac{1}{G} \sum_{i=1}^{G} \left( \min \left( \frac{\pi_{\theta}(o_{i})}{\pi_{\theta_{\mathrm{old}}}(o_{i})} A_{i},\ \operatorname{clip}\left( \frac{\pi_{\theta}(o_{i})}{\pi_{\theta_{\mathrm{old}}}(o_{i})},\ 1 - \varepsilon,\ 1 + \varepsilon \right) A_{i} \right) - \beta\, \mathbb{D}_{KL}\left( \pi_{\theta} \,\|\, \pi_{\mathrm{ref}} \right) \right) \tag{9}
451
+ $$
452
+
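+ To make Equations 8 and 9 concrete, below is a minimal NumPy sketch of the group-relative advantage and the per-sample clipped term; the small epsilon guarding a zero standard deviation is our assumption, and the $\beta$-weighted KL penalty is omitted for brevity.
+
+ ```python
+ import numpy as np
+
+ def group_relative_advantages(rewards):
+     """Eq. (8): normalize each sampled output's reward within its group."""
+     r = np.asarray(rewards, dtype=np.float64)
+     return (r - r.mean()) / (r.std() + 1e-8)  # epsilon avoids division by zero
+
+ def clipped_objective(ratio, advantage, eps=0.2):
+     """Per-sample clipped term of Eq. (9); the KL penalty is omitted here."""
+     return np.minimum(ratio * advantage,
+                       np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)
+
+ # Example with G = 4 outputs sampled for one prompt:
+ adv = group_relative_advantages([1.0, 0.0, 1.0, 0.0])  # approximately [1, -1, 1, -1]
+ ```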
453
+ # C.2 Overview of DICE Algorithm
454
+
455
+ The overview algorithm of our proposed DICE framework is shown in Algorithm 1.
456
+
457
+ # D Implementation Details
458
+
459
+ # D.1 Structured Data Construction
460
+
461
+ For SFT and Aligner, we constructed the training sets using only the original data. Since the CommonsenseQA and MedQA datasets provide only the answers without any reasoning information, the target structured outputs for these two datasets in SFT and Aligner contain only the selected option label and the corresponding answer, without step-by-step reasoning. The step-by-step reasoning in GSM8K, MATH, and StrategyQA is derived from the original benchmarks.
462
+
463
+ For BBox-Adapter, we first prompt Qwen2.5-72B-Instruct-GPTQ-Int4 to generate reasoning for each training sample in CommonsenseQA and MedQA. Then we use the reasoning and ground-truth answers of all five benchmarks to construct standard structured outputs. Subsequently, we instruct the LLM to generate five candidates for each question in the training set with a 2-shot prompt. The small model is then trained using both the standard structured outputs and the candidates.
464
+
465
+ For CoBB, we generate positive reasoning via Qwen2.5-72B-Instruct-GPTQ-Int4 and randomly sample reasoning from other questions as negative reasoning. This approach allows us to construct both positive and negative structured outputs.
466
+
467
+ # D.2 Fine-tuning Details
468
+
469
+ Our experiments were conducted on NVIDIA A100 GPUs (80G memory). When running all baselines and our proposed DICE method, we utilized the vLLM (Kwon et al., 2023), LLamaFactory (Zheng et al., 2024), and SWIFT (Zhao et al., 2024) frameworks for model fine-tuning.
470
+
471
+ Main Experiment In the main experiments (Tables 1 and 2), we use Qwen2.5-72B-Instruct-GPTQ-Int4 as the large language model, and Qwen2.5-0.5B-Instruct, Qwen2.5-1.5B-Instruct, and Qwen2.5-3B-Instruct as the small models. All models are trained on two A100 GPUs with bf16 precision. Other detailed experimental configurations are: (1) For SFT and Aligner baselines, LoRA fine-tuning is applied with rank 32 and alpha 64 for 3 epochs. The training process uses a batch size of 64, learning rate of $2 \times 10^{-4}$, warmup ratio of 0.1, and weight decay of 0.1. (2) BBox-Adapter adopts full fine-tuning for 3 epochs. During training, the candidate count is set to 5, max length to 1, batch size to 100, and the model operates in classification mode. The remaining parameters are kept consistent with the original paper. (3) CoBB utilizes LoRA rank 32 and alpha 64 for 5 epochs. The hyperparameter $\lambda$ is fixed at 0.1, while the training employs a batch size of 64 and a learning rate of $1 \times 10^{-5}$, maintaining other parameters as in the original work. (4) For our proposed DICE framework, both SFT and GRPO stages adopt LoRA rank 32 and alpha 64. During the SFT stage, the SLM is trained for 2 epochs with batch size 64, learning rate $2 \times 10^{-4}$, warmup ratio 0.1, and weight decay 0.1. In the subsequent GRPO stage, the hyperparameter $G$ is set to 16, the learning rate is reduced to $1 \times 10^{-5}$, temperature is set to 0.8, batch size increases to 128, warmup ratio decreases to 0.05, and training continues for 1 additional epoch.
472
+
473
+ Ablation Study In the ablation study, we utilize Qwen2.5-72B-Instruct-GPTQ-Int4 as the LLM and Qwen2.5-1.5B-Instruct as the original SLM. All experiments employ LoRA with a rank of 32 and an alpha of 64, conducted at bf16 precision. In the SFT-only setting, we set the learning rate to $2 \times 10^{-4}$, batch size to 64, warmup ratio and weight decay to 0.1, and train for 3 epochs. In the GRPO-only setting, we set the hyperparameter $G$ to 16, learning rate to $2 \times 10^{-5}$, batch size to 128, temperature to 0.9, and also train for 3 epochs. All other hyperparameters remain consistent with those used in the main experiments.
476
+
477
+ # E Correctness Ratio of $y_{o}$ Ablation Experiment Result
478
+
479
+ The content accuracy of models trained with different correctness ratios of $y_{o}$ and evaluated with different LLMs is shown in Table 6.
480
+
481
+ <table><tr><td>Ratio</td><td>100%</td><td>75%</td><td>50%</td><td>25%</td></tr><tr><td>Qwen2.5-72B + DICE</td><td>79.0</td><td>79.4</td><td>78.0</td><td>74.6</td></tr><tr><td>Qwen2.5-7B + DICE</td><td>71.8</td><td>70.6</td><td>72.0</td><td>67.8</td></tr><tr><td>Llama3-8B + DICE</td><td>33.6</td><td>33.0</td><td>34.4</td><td>34.0</td></tr><tr><td>GPT-4.1-mini + DICE</td><td>67.8</td><td>68.8</td><td>70.8</td><td>65.4</td></tr><tr><td>Average</td><td>63.1</td><td>63.0</td><td>63.8</td><td>60.5</td></tr></table>
482
+
483
+ Table 6: Ablation experiments on the correctness ratio of $y_{o}$. Qwen2.5-72B, Qwen2.5-7B, and Llama3-8B are short for Qwen2.5-72B-Instruct-GPTQ-Int4, Qwen2.5-7B-Instruct, and Meta-Llama-3-8B-Instruct, respectively.
484
+
485
+ # MATH
486
+
487
+ # Sample
488
+
489
+ Question: Evaluate $\left\lceil3\left(6-\frac{1}{2}\right)\right\rceil$
490
+
491
+ Answer: Firstly, $3\left(6-\frac{1}{2}\right) = 18 - \frac{3}{2} = 17 - \frac{1}{2}$. Since $17 - \frac{1}{2}$ lies between 16 and 17, the ceiling evaluates to $\boxed{17}$.
492
+
493
+ # - Structured Output
494
+
495
+ ```txt
+ xml format\n<?xml version="1.0" encoding="UTF-8"?>\n<Answer>\n<Step by step reasoning>{"step1": "Firstly, $3\left(6-\frac{1}{2}\right) = 18 - \frac{3}{2} = 17 - \frac{1}{2}$.", "step2": "Since $17 - \frac{1}{2}$ is between 16 and 17, the ceiling is 17."}</Step by step reasoning>\n<Final answer>17</Final answer>\n</Answer>\n
+ ```
504
+
744
+
745
+ # CommonsenseQA
746
+
747
+ # Sample
748
+
749
+ Question: Google Maps and other highway and street GPS services have replaced what?
750
+
751
+ Choices: A. united states, B. mexico, C. countryside, D. atlas, E. oceans
752
+
753
+ # Answer: D
754
+
755
+ LLM-generated Reasoning: The question is asking about what has been replaced by Google Maps and other GPS services. We need to analyze each option to determine which one is most likely to be replaced by GPS services. A. United States: This is a country, not a tool or service that could be replaced by GPS. B. Mexico: This is also a country, not a tool or service. C. Countryside: This refers to rural areas, not a tool or service. D. Atlas: An atlas is a collection of maps, which is a tool used for navigation and understanding geographic locations. E. Oceans: This refers to large bodies of water, not a tool or service. The most logical answer is that GPS services have replaced atlases.
756
+
757
+ # - Structured Output
758
+
759
+ ```txt
+ xml format\n<?xml version="1.0" encoding="UTF-8"?>\n<Answer>\n<Step by step reasoning>{"step1": "The question is asking about what has been replaced by Google Maps and other GPS services.", "step2": "We need to analyze each option to determine which one is most likely to be replaced by GPS services.", "step3": "A. United States: This is a country, not a tool or service that could be replaced by GPS.", "step4": "B. Mexico: This is also a country, not a tool or service.", "step5": "C. Countryside: This refers to rural areas, not a tool or service.", "step6": "D. Atlas: An atlas is a collection of maps, which is a tool used for navigation and understanding geographic locations.", "step7": "E. Oceans: This refers to large bodies of water, not a tool or service.", "step8": "The most logical answer is that GPS services have replaced atlases."}</Step by step reasoning>\n<Answer label>D</Answer label>\n<Result>atlas</Result>\n</Answer>\n
+ ```
760
+
1020
+ ```txt
+ yaml format\nstep_by_step_reasoning:\n-step1: "The question is asking about what has been replaced by Google Maps and other GPS services."\n-step2: "We need to analyze each option to determine which one is most likely to be replaced by GPS services."\n-step3: "A. United States: This is a country, not a tool or service that could be replaced by GPS."\n-step4: "B. Mexico: This is also a country, not a tool or service."\n-step5: "C. Countryside: This refers to rural areas, not a tool or service."\n-step6: "D. Atlas: An atlas is a collection of maps, which is a tool used for navigation and understanding geographic locations."\n-step7: "E. Oceans: This refers to large bodies of water, not a tool or service."\n-step8: "The most logical answer is that GPS services have replaced atlases."\nanswer_label:\n-label: "D"\nresult:\n-answer: "atlas"\n
+ ```
1021
+
1022
+ # StrategyQA
1023
+
1024
+ # Sample
1025
+
1026
+ Question: Would a Monoamine Oxidase candy bar cheer up a depressed friend?
1027
+
1028
+ # Answer: False
1029
+
1030
+ Reasoning: Depression is caused by low levels of serotonin, dopamine and norepinephrine. Monoamine Oxidase breaks down neurotransmitters and lowers levels of serotonin, dopamine and norepinephrine.
1031
+
1032
+ # - Structured Output
1033
+
1034
+ ```txt
+ xml format\n<?xml version="1.0" encoding="UTF-8"?>\n<Answer>\n<Step by step reasoning>{"step1":"Depression is caused by low levels of serotonin, dopamine and norepinephrine.", "step2":"Monoamine Oxidase breaks down neurotransmitters and lowers levels of serotonin, dopamine and norepinephrine."}</Step by step reasoning>\n<Final answer>False</Final answer>\n</Answer>\n
+ ```
1268
+
1269
+ Figure 7: Examples from MATH, CommonsenseQA, and StrategyQA. The original CommonsenseQA dataset does not contain reasoning information; therefore, we instruct the LLM to generate reasoning.
2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f0710056deeecbb8cd447775629c0302a5204e2f71a2078bff5eceee36cb858
3
+ size 612640
2025/DICE_ Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/997032d8-3fb5-4fb3-b06b-f18b26ee4b5c_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/997032d8-3fb5-4fb3-b06b-f18b26ee4b5c_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/997032d8-3fb5-4fb3-b06b-f18b26ee4b5c_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1c61436085559cd16d240782902bfd745198e65de0f2509926de4348cb2fd776
3
+ size 634446
2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/full.md ADDED
@@ -0,0 +1,669 @@
1
+ # DIDS: Domain Impact-aware Data Sampling for Large Language Model Training
2
+
3
+ Weijie Shi $^{1*†}$ , Jipeng Zhang $^{1*}$ , Yaguang Wu $^{2}$ , Jingzhi Fang $^{1}$ , Shibo Zhang $^{2}$ , Yao Zhao $^{3}$ , Hao Chen $^{1}$ , Ruiyuan Zhang $^{1}$ , Yue Cui $^{1}$ , Jia Zhu $^{4}$ , Sirui Han $^{1‡}$ , Jiajie Xu $^{5}$ , Xiaofang Zhou $^{1‡}$
4
+
5
+ <sup>1</sup>The Hong Kong University of Science and Technology, <sup>2</sup>MetaX, <sup>3</sup>Alibaba Group, <sup>4</sup>Zhejiang Normal University, <sup>5</sup>Soochow University
6
+
7
+ # Abstract
8
+
9
+ Large language models (LLMs) are commonly trained on multi-domain datasets, where domain sampling strategies significantly impact model performance due to varying domain importance across downstream tasks. Existing approaches for optimizing domain-level sampling strategies struggle with maintaining intra-domain consistency and accurately measuring domain impact. In this paper, we present Domain Impact-aware Data Sampling (DIDS). To ensure intra-domain consistency, a gradient clustering algorithm is proposed to group training data based on their learning effects, where a proxy language model and dimensionality reduction are employed to reduce computational overhead. To accurately measure domain impact, we develop a Fisher Information Matrix (FIM) guided metric that quantifies how domain-specific parameter updates affect the model's output distributions on downstream tasks, with theoretical guarantees. Furthermore, to determine optimal sampling ratios, DIDS combines both the FIM-guided domain impact assessment and loss learning trajectories that indicate domain-specific potential, while accounting for diminishing marginal returns. Extensive experiments demonstrate that DIDS achieves $3.4\%$ higher average performance while maintaining comparable training efficiency. The code is available at https://github.com/shiweijiezero/DIDS.
10
+
11
+ # 1 Introduction
12
+
13
+ Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks through training on massive multi-domain datasets, enabling robust generalization and adaptation abilities (Weber et al., 2024; Biderman et al., 2023; Chen, 2024; Azaria et al., 2024; Zhang et al., 2024). While the composition of training data
14
+
15
+ (e.g., code, scientific papers, web text) significantly shapes model performance, their relative importance varies substantially with respect to target applications. On the one hand, some data domains contribute positively to model performance, whereas others may even impair effectiveness and waste computational resources (Xia et al., 2024; Zhou et al., 2024). On the other hand, each domain's contribution to model learning evolves dynamically throughout the training process (Luo et al., 2024; Kang et al., 2024). This necessitates an approach for optimizing domain-level data sampling strategies during LLM training to maximize performance across downstream tasks while maintaining training efficiency. Unfortunately, designing such an algorithm presents several crucial challenges.
16
+
17
+ Intra-domain Consistency. A fundamental prerequisite for effective domain-level sampling strategies is maintaining data consistency within each domain. Existing approaches either rely on data source categorization (Xie et al., 2024; Fan et al., 2024b) or employ BERT semantic clustering (Fan et al., 2024a). However, these methods often fail to ensure that data within each domain has similar training effects, which is crucial for making domain-level sampling strategies meaningful. To address this limitation, gradient information serves as a more direct measure of training impact. Gradients inherently capture how each data point influences model parameters during training, enabling us to group samples based on their learning effects rather than superficial characteristics.
18
+
19
+ Domain Impact and Mixing Strategy. The next key challenge lies in accurately measuring each domain's impact on downstream tasks throughout the dynamic training process. Unfortunately, existing grid search methods (Ye et al., 2024; Liu et al., 2024b; McKinzie et al., 2025) are computationally intensive and cannot adapt to the dynamic of domain importance during training,
20
+
21
+ while gradient similarity approaches (Fan et al., 2024b,a) only measure the instantaneous parameter update direction alignment without considering how these updates actually affect the model's predictive behavior on downstream tasks. To quantify such influence in a principled way, a natural objective is minimizing the output distributional discrepancy between how different domains' updates shift the model's predictions. Beyond measuring impact, determining optimal sampling ratios requires balancing computation resources across all downstream tasks while considering the marginal utility of domain data, as domain-specific capabilities may saturate with diminishing returns over time.
22
+
23
+ In this paper, we propose Domain Impact-aware Data Sampling (DIDS), which dynamically optimizes domain-level sampling probability by measuring domains' impact on model's predictive behavior. To ensure intra-domain consistency, a gradient clustering algorithm is proposed to group training data, where a small proxy language model is employed instead of the full-size model to reduce computation cost, followed by gradient norm-based subsampling and Johnson-Lindenstrauss random projection for dimensionality reduction. To accurately measure domain impact, a Fisher Information Matrix (FIM) guided metric is developed to quantify the output distributional shift based on the second-order Taylor approximation of KL divergence, enabling efficient assessment of how each domain affects the model's predictive behavior on downstream tasks. We also provide theoretical foundations for the FIM-guided metric. To determine domain sampling proportions, weights are computed by combining both the FIM-guided domain impact on downstream tasks and their loss improvement trajectories indicating learning potential. Extensive experiments on Llama-3.1 across 9 downstream tasks demonstrate that DIDS achieves $3.4\%$ higher average performance. Our contributions are summarized as follows:
24
+
25
+ - We present a gradient-based data clustering that leverages proxy models and dimensionality reduction to group training samples, ensuring intra-domain training consistency.
26
+ - We propose a FIM-guided impact metric that measures how domain-specific parameter updates shift the model's output distributions on downstream tasks, enabling accurate assessment of domain importance during training, with theoretical foundations.
29
+
30
+ - We design DIDS, a domain sampling framework that dynamically adjusts mixing ratios by combining domain impact with learning trajectories, accounting for diminishing marginal returns of domain-specific performance.
31
+
32
+ # 2 Related Work
33
+
34
+ # 2.1 Instance-level Data Sampling
35
+
36
+ Instance-level data sampling approaches for language model training primarily focus on selecting high-quality training samples that maximize model performance. LIMA (Zhou et al., 2024) demonstrates that a small set of 1,000 carefully curated prompts can achieve strong performance, Ge et al. (2024) ensures both quality and diversity through BERT-based scoring and clustering, and DEITA (Liu et al., 2023) further considers instruction complexity by ChatGPT. Moreover, to align requirements of the specific downstream task, DSIR (Xie et al., 2023) utilizes N-gram feature-based importance resampling, while LESS (Xia et al., 2024) and TracIn (Pruthi et al., 2020) leverage gradient-based methods to identify influential training samples through gradient alignment and descent tracing. However, these approaches either lack downstream task awareness or are computationally expensive, motivating domain-level sampling strategies.
37
+
38
+ # 2.2 Domain-level Data Sampling
39
+
40
+ Domain-level data sampling strategies can be categorized into static and online methods. Static methods determine fixed sampling ratios using proxy models before full-scale training begins. MM1 (McKinzie et al., 2025) employs grid search to evaluate different sampling ratios empirically, while Mixing Laws (Ye et al., 2024) extends this by proposing scaling law formulas to model the relationship between mixing ratios and model performance. REGMIX (Liu et al., 2024b) introduces regression models to predict this scaling curve. Moreover, Doremi (Xie et al., 2024) incorporates reference models to consider excess loss, and Doge (Fan et al., 2024b) utilizes gradient alignment between training and validation sets. However, AUTOSCALE (Kang et al., 2024) reveals that optimal mixing ratios derived from proxy models may not transfer effectively to larger models.
41
+
42
+ ![](images/bc360305bd85c657e73551f782f027b9b67f66e085b8e6a40d59dfb50cf496d6.jpg)
43
+
44
+ ![](images/d88eb68a82970e288a7d679944e5d92cf1c8e8840c62a691721124fc4683ad04.jpg)
45
+ Step 2: Domain Impact Matrix
46
+
47
+ ![](images/df255817f63da0272800690e42e06a2d5cfbfc45a8077adc3f025b39579c2a1c.jpg)
48
+ Step 3: Update Sampling Vector
49
+ Figure 1: Overview of DIDS's three-step process: (1) Domain repartition using gradient-based clustering with a proxy model and dimensionality reduction through random projection, (2) Domain impact measurement using FIM-guided metrics that quantify how domain-specific parameter updates affect model's output distributions on downstream tasks, and (3) Dynamic sampling probability updates that combine both FIM-guided domain impact assessment and loss learning trajectories to account for diminishing marginal returns.
50
+
51
+ Thus, online methods directly adjust sampling ratios throughout the training process. DGA (Fan et al., 2024a) extends Doge's gradient-based approach to online scenarios, while Velocitune (Luo et al., 2024) monitors learning velocity to adaptively adjust domain proportions.
52
+
53
+ Moreover, DRPruning (Deng et al., 2024) employs distributionally robust optimization to iteratively shift data distribution toward underperforming domains during training, ensuring balanced recovery across all areas rather than allowing some domains to lag behind after model pruning. It shares our motivation for adaptive domain reweighting but focuses specifically on post-pruning recovery scenarios. DDK (Liu et al., 2024a) computes perplexity ratios between teacher and student models across domains and uses factor-smooth updating mechanisms to periodically adjust sampling probabilities. DDK allocates more training data to domains where the student model underperforms relative to the teacher, thereby reducing performance gaps during knowledge distillation.
54
+
55
+ Yet existing methods either rely on gradient similarity alone without capturing downstream impact, or use computationally expensive techniques like Scaling Law, limiting their practicality. This motivates our efficient, theoretically grounded approach to dynamic domain-level sampling.
56
+
57
+ # 3 Problem Formulation
58
+
59
+ In this part, we formalize the problem of optimizing domain-level sampling strategy for LLM training.
60
+
61
+ Let $\mathcal{D} = \{D_1, \dots, D_k\}$ denote a training dataset comprising $k$ disjoint domains and $S = \{S_{1},\dots,S_{m}\}$ represent a collection of downstream tasks. Given a large language model $f_{\theta}$ parameterized by $\theta$ and a computational budget of $n$ training instances, our goal is to optimize the model's performance across all tasks by adjusting the sampling probabilities across different domains during parameter training.
64
+
65
+ We characterize the domain sampling strategy through a probability vector $\mathbf{p}_t = [p_{t,1},\dots,p_{t,k}]$ at each training step $t$ , where $p_{t,i}$ represents the sampling probability from domain $D_{i}$ subject to the $(k - 1)$ -dimensional probability simplex $\Pi^{k - 1}$ :
66
+
67
+ $$
68
+ \mathbf {p} _ {t} \in \Pi^ {k - 1} = \left\{p _ {t, i} \geq 0, \sum_ {i = 1} ^ {k} p _ {t, i} = 1 \right\} \tag {1}
69
+ $$
70
+
71
+ The objective of the training process follows a bi-level optimization framework to optimize both model parameters $\theta$ and sampling probabilities $\mathbf{p}$ :
72
+
73
+ $$
74
+ \max_{\theta, \mathbf{p} \in \Pi^{k - 1}} \sum_{j = 1}^{m} \operatorname{Acc}_{j} \left(f_{\theta}; S_{j}\right) \tag{2}
75
+ $$
76
+
77
+ where $\mathrm{Acc}_j(f_\theta; S_j)$ measures the model's accuracy on downstream task $S_j$ .
78
+
79
+ To update the model parameters, we perform standard gradient descent:
80
+
81
+ $$
82
+ \theta_ {t + 1} = \theta_ {t} - \eta \nabla \ell \left(\theta_ {t}, \mathcal {B} _ {t}\right), \quad \mathcal {B} _ {t} \sim \mathbf {p} _ {\mathbf {t}} \tag {3}
83
+ $$
84
+
85
+ where $\mathcal{B}_t$ denotes a mini-batch sampled according to the domain sampling probabilities $\mathbf{p}_t$ , $\eta$ denotes
86
+
87
+ the learning rate, and $\nabla \ell$ computes the loss gradients with respect to the model parameters.
88
+
89
+ To update the domain sampling probabilities, we periodically adjust the sampling distribution every $\tau$ steps to optimize the expected model performance across all downstream tasks:
90
+
91
+ $$
92
+ \mathbf{p}_{t} = \underset{\mathbf{p} \in \Pi^{k - 1}}{\arg \max} \sum_{j = 1}^{m} \operatorname{Acc}_{j} \left(f_{\theta_{t + \tau}}; S_{j}\right) \tag{4}
93
+ $$
94
+
95
+ where $\theta_{t + \tau}$ represents the model parameters after $\tau$ steps of training using sampling distribution $\mathbf{p}_t$ .
96
+
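+ As a minimal sketch of the sampling step in Equation 3, the snippet below draws a mini-batch whose examples come from domain $D_i$ with probability $p_{t,i}$; the `domains` layout (a list of per-domain example lists) is a hypothetical stand-in for the actual data loader.
+
+ ```python
+ import numpy as np
+
+ def sample_minibatch(domains, p_t, batch_size, rng=None):
+     """Eq. (3): draw B_t by sampling a domain index i ~ p_t per example,
+     then an example uniformly from that domain (hypothetical data layout)."""
+     if rng is None:
+         rng = np.random.default_rng(0)
+     domain_ids = rng.choice(len(domains), size=batch_size, p=p_t)
+     return [domains[i][rng.integers(len(domains[i]))] for i in domain_ids]
+ ```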
97
+ # 4 Methodology
98
+
99
+ # 4.1 Gradient-based Domain Repartition
100
+
101
+ Effective domain-level sampling strategies require consistent training behavior within each domain. Traditional approaches to domain partitioning typically rely on superficial characteristics, such as data sources or semantic similarity measured through BERT embeddings. However, these methods often fail to capture how different training samples actually influence model learning. For instance, mathematical proofs and programming implementations, despite being traditionally categorized into different domains, often induce similar gradient patterns during training due to their shared logical reasoning nature. Conversely, two web documents from the same domain might trigger drastically different parameter updates. To better organize the training data, a gradient-based domain repartitioning is suitable to directly reflect parameter update behaviors.
102
+
103
+ Unfortunately, computing and clustering gradients using a full-size LLM for all samples would be computationally prohibitive. A small proxy model maintaining the same architecture but with reduced width and depth serves as an efficient alternative. For each training sample $x_{i} \in \mathcal{D}$, gradient computation yields vector $g_{i}$ through $\nabla \ell(\theta', x_{i})$. Here we keep only the gradients of the last $10\%$ of layers to accelerate computation. To make clustering computationally feasible, gradient norm-based subsampling retains only the top-k elements with the largest magnitudes in each gradient vector. Next, dimensionality reduction is performed via Johnson-Lindenstrauss random projection (Park et al., 2023) to compress the gradient vectors from parameter-scale dimensionality (millions-level) to a clustering-manageable dimension (thousands-level):
104
+
105
+ $$
106
+ \tilde {g} _ {i} = R ^ {T} g _ {i}, \quad R \in \mathbb {R} ^ {h \times s} \tag {5}
107
+ $$
108
+
109
+ where $h$ represents the original dimension and $s$ denotes the target dimension satisfying $s \ll h$ . The random projection matrix $R$ is initialized by randomly orthogonal matrices. The detailed Johnson-Lindenstrauss theorem and initialization methods are provided in Appendix A.
110
+
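+ A minimal sketch of the projection in Equation 5 follows; for brevity it uses a scaled Gaussian matrix, which also satisfies the Johnson-Lindenstrauss guarantee, whereas the paper initializes $R$ with randomly orthogonal matrices (Appendix A).
+
+ ```python
+ import numpy as np
+
+ def jl_project(grads, s, seed=0):
+     """Compress (n, h) gradient vectors to (n, s) with s << h, per Eq. (5)."""
+     n, h = grads.shape
+     rng = np.random.default_rng(seed)
+     R = rng.standard_normal((h, s)) / np.sqrt(s)  # distance-preserving in expectation
+     return grads @ R  # row i is R^T g_i
+ ```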
111
+ Building upon initial semantic categorization, k-means clustering on these reduced gradient vectors refines each domain, where the number of clusters serves as a hyperparameter. The resulting domains are denoted as $\mathcal{D} = \{D_1,\dots ,D_k\}$ , where $k$ represents the total number of domains.
112
+
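+ Continuing the sketch above, the refinement step could then look as follows; the array sizes, cluster count, and target dimension are placeholder values rather than the paper's settings.
+
+ ```python
+ from sklearn.cluster import KMeans
+ import numpy as np
+
+ # Toy stand-in for the top-k subsampled per-sample gradients of one category.
+ grads = np.random.default_rng(1).standard_normal((200, 10_000))
+ reduced = jl_project(grads, s=256)  # from the sketch above
+ labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(reduced)
+ # Each cluster becomes one domain D_i with (approximately) consistent updates.
+ ```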
113
+ # 4.2 FIM-guided Domain Impact
114
+
115
+ After establishing consistent domain partitions, a key challenge is accurately measuring how each domain's training data impacts model performance on downstream tasks. Existing approaches either rely on computationally expensive grid search methods that cannot adapt to dynamic training processes, or use gradient similarity metrics. For example, DGA (Fan et al., 2024a) measures the domain impact on specific downstream tasks as:
116
+
117
+ $$
118
+ I \left(D _ {i}, S _ {j}\right) = \mathbb {E} _ {x _ {i} \sim D _ {i}, x _ {j} \sim S _ {j}} \left[ \left\langle \nabla \ell \left(\theta^ {t}, x _ {i}\right), \nabla \ell \left(\theta^ {t}, x _ {j}\right) \right\rangle \right] \tag {6}
119
+ $$
120
+
121
+ where $I(D_{i},S_{j})$ measures the impact of training domain $D_{i}$ on downstream task $S_{j}$ , $\langle \nabla \ell (\theta^t,x_i),\nabla \ell (\theta^t,x_j)\rangle$ represents the inner product of gradients. However, they only capture instantaneous parameter update directions without considering their actual effects on model behavior. We need a more principled approach that can efficiently quantify how domain-specific parameter updates influence the model's predictive distributions on target tasks.
122
+
123
+ To this end, we propose a Fisher Information Matrix (FIM) guided metric that quantifies the output distributional changes induced by domain-specific data. The core insight is that the Kullback-Leibler (KL) divergence between the original and updated model predictions provides a natural measure of how parameter updates affect model behavior. Due to the intractable nature of direct KL divergence computation in infinite input spaces, here we employ a second-order Taylor approximation.
124
+
125
+ For notational simplicity, let $p(y|\theta)$ be denoted as $p(\theta),\theta_{D_i} = \theta +\nabla \ell_{D_i}$ and $\theta_{S_j} = \theta +\nabla \ell_{S_j}$ represent the parameters after updates from domain $D_{i}$ and task $S_{j}$ respectively, and $\Delta = \nabla \ell_{S_j} - \nabla \ell_{D_i}$ represent the gradient difference between downstream task updates and training domain updates.
126
+
127
+ Formally, we define the domain impact metric as:
128
+
129
+ $$
130
+ \begin{array}{l} I (D _ {i}, S _ {j}) = \mathrm {K L} [ p (\theta_ {D _ {i}}) \parallel p (\theta_ {S _ {j}}) ] \\ = \int p (\theta_ {D _ {i}}) \log \frac {p (\theta_ {D _ {i}})}{p (\theta_ {S _ {j}})} d y \\ = \mathbb {E} _ {p \left(\theta_ {D _ {i}}\right)} \left[ \log p \left(\theta_ {D _ {i}}\right) \right] - \mathbb {E} _ {p \left(\theta_ {D _ {i}}\right)} \left[ \log p \left(\theta_ {S _ {j}}\right) \right] \tag {7} \\ \end{array}
131
+ $$
132
+
133
+ When the gradient updates are small (i.e., $\nabla \ell_{D_i} \approx \nabla \ell_{S_j} \approx 0$ ), we can approximate using second-order Taylor expansion around $\theta_{D_i}$ as:
134
+
135
+ $$
136
+ \begin{array}{l} \mathrm {K L} [ p (\theta_ {D _ {i}}) | | p (\theta_ {S _ {j}}) ] \approx \mathbb {E} _ {p (\theta_ {D _ {i}})} [ \log p (\theta_ {D _ {i}}) ] - \mathbb {E} _ {p (\theta_ {D _ {i}})} [ \\ \log p \left(\theta_ {D _ {i}}\right) + \nabla \log p \left(\theta_ {D _ {i}}\right) \Delta + \frac {1}{2} \Delta^ {T} \nabla^ {2} \log p \left(\theta_ {D _ {i}}\right) \Delta ] \\ = - \mathbb {E} _ {p \left(\theta_ {D _ {i}}\right)} \left[ \nabla \log p \left(\theta_ {D _ {i}}\right) \Delta \right] \\ - \mathbb {E} _ {p \left(\theta_ {D _ {i}}\right)} \left[ \frac {1}{2} \Delta^ {T} \nabla^ {2} \log p \left(\theta_ {D _ {i}}\right) \Delta \right] \tag {8} \\ \end{array}
137
+ $$
138
+
139
+ The first term can be simplified through integration-differentiation interchange:
140
+
141
+ $$
142
+ \begin{array}{l} \mathbb {E} _ {p (\boldsymbol {\theta} _ {D _ {i}})} [ \nabla \log p (\boldsymbol {\theta} _ {D _ {i}}) \Delta ] = \int_ {\boldsymbol {\theta} _ {D _ {i}}} \frac {\nabla p (\boldsymbol {\theta} _ {D _ {i}})}{p (\boldsymbol {\theta} _ {D _ {i}})} p (\boldsymbol {\theta} _ {D _ {i}}) \Delta d \boldsymbol {\theta} _ {D _ {i}} \\ = \nabla \int_ {\theta_ {D _ {i}}} p \left(\theta_ {D _ {i}}\right) d \theta_ {D _ {i}} \cdot \Delta \\ = \nabla (1) \cdot \Delta = 0 \tag {9} \\ \end{array}
143
+ $$
144
+
145
+ For the second term, the expected Hessian of the negative log-likelihood is equivalent to Fisher Information Matrix:
146
+
147
+ $$
148
+ \begin{array}{l} \mathbb {E} _ {p (\boldsymbol {\theta} _ {D _ {i}})} [ \nabla^ {2} \log p (\boldsymbol {\theta} _ {D _ {i}}) ] = \mathbb {E} _ {p (\boldsymbol {x} | \boldsymbol {\theta} _ {D _ {i}})} [ \mathbf {H} _ {\log p (\boldsymbol {x} | \boldsymbol {\theta} _ {D _ {i}})} ] \\ = - F \tag {10} \\ \end{array}
149
+ $$
150
+
151
+ Considering that the FIM for LLMs is extremely large and cannot be computed at $\theta_{D_i}$ since the model has not been updated, we instead use diagonal approximation at $\theta$ in practice:
152
+
153
+ $$
154
+ \mathrm {F} \approx \mathbb {E} [ \nabla \log p (\theta) \odot \nabla \log p (\theta) ] \tag {11}
155
+ $$
156
+
157
+ Note that FIM only measures the local geometry of the parameter space, and the difference between using FIM at $\theta_{D_i}$ and $\theta$ is negligible when the gradient updates are small. Afterward, the domain
158
+
159
+ impact metric could be rewritten as:
160
+
161
+ $$
162
+ \begin{array}{l} I \left(D _ {i}, S _ {j}\right) = \mathrm {K L} \left[ p \left(\theta_ {D _ {i}}\right) \| p \left(\theta_ {S _ {j}}\right) \right] \\ = - \mathbb {E} _ {p (\theta_ {D _ {i}})} \left[ \frac {1}{2} \Delta^ {T} \nabla^ {2} \log p (\theta_ {D _ {i}}) \Delta \right] \\ = \frac {1}{2} \Delta^ {T} F \Delta \tag {12} \\ \end{array}
163
+ $$
164
+
165
+ This quadratic form captures how the difference in gradient updates affects the model's output distribution, weighted by the FIM which characterizes the local geometry of the parameter space. The complexity analysis is provided in Section 5.4.
166
+
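+ A minimal sketch of Equations 11 and 12 under the diagonal Fisher approximation follows; gradient vectors are assumed to be flattened into 1-D arrays.
+
+ ```python
+ import numpy as np
+
+ def fim_diagonal(per_sample_grads):
+     """Eq. (11): diagonal FIM as the mean elementwise squared gradient."""
+     g = np.asarray(per_sample_grads, dtype=np.float64)
+     return (g * g).mean(axis=0)
+
+ def domain_impact(grad_domain, grad_task, fim_diag):
+     """Eq. (12): I(D_i, S_j) = 0.5 * Delta^T F Delta with F diagonal."""
+     delta = grad_task - grad_domain
+     return 0.5 * float(np.sum(delta * fim_diag * delta))
+ ```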
167
+ # 4.3 Dynamic Domain Sampling Strategy
168
+
169
+ Building upon the FIM-guided domain impact measurement, a dynamic sampling strategy is proposed to optimize domain mixing ratios by considering both current learning progress and future potential. The sampling probability for each domain is updated periodically using a combination of three key components:
170
+
171
+ Current Performance Impact. To identify valuable domains that can achieve larger performance improvements with lower sampling probabilities, we compute a utility score for each domain $D_{i}$ and downstream task $S_{j}$ that measures the domain's effectiveness in improving task performance:
172
+
173
+ $$
174
+ U \left(D _ {i}, S _ {j}\right) = \frac {I \left(D _ {i} , S _ {j}\right) \cdot l _ {c}}{p _ {t - 1 , i}} \tag {13}
175
+ $$
176
+
177
+ where $I(D_{i},S_{j})$ is the normalized FIM-guided impact score, $l_{c}$ represents the loss improvement on task $S_{j}$ between consecutive updates $\Delta L(S_j)$ , and $p_{t - 1,i}$ is the previous sampling probability for domain $D_{i}$ .
178
+
179
+ Future Potential Estimation. To account for the diminishing returns in domain-specific learning and prioritize unsaturated domains, we introduce a potential factor $l_{p}$ that estimates future improvement opportunities. Given the loss history $l_{1},\dots,l_{t}$ for each downstream task, we fit an exponential decay model<sup>1</sup>, which is a typical pattern for learning curves:
180
+
181
+ $$
182
+ l _ {t} = a e ^ {- b t} + c \tag {14}
183
+ $$
184
+
185
+ where parameters $a, b,$ and $c$ are estimated using curve fitting. The potential factor $l_{p}$ is then computed as the difference between current loss and
186
+
187
+ predicted future loss:
188
+
189
+ $$
190
+ l _ {p} = l _ {t} - l _ {t + \tau} \tag {15}
191
+ $$
192
+
193
+ where $\tau$ represents the prediction window size.
194
+
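+ A minimal sketch of the potential-factor estimate in Equations 14 and 15 follows, using `scipy.optimize.curve_fit`; the initial guess `p0` and the iteration cap are our assumptions.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import curve_fit
+
+ def exp_decay(t, a, b, c):
+     """Eq. (14): l_t = a * exp(-b t) + c."""
+     return a * np.exp(-b * t) + c
+
+ def potential_factor(loss_history, tau):
+     """Eq. (15): l_p = l_t - l_{t+tau}, with l_{t+tau} extrapolated from the fit."""
+     t = np.arange(1, len(loss_history) + 1, dtype=np.float64)
+     (a, b, c), _ = curve_fit(exp_decay, t, loss_history,
+                              p0=(1.0, 0.1, 0.0), maxfev=10_000)
+     return exp_decay(t[-1], a, b, c) - exp_decay(t[-1] + tau, a, b, c)
+ ```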
195
+ Sampling Probability Update. The final sampling probabilities are updated using an exponential moving average (EMA) to maintain stability:
196
+
197
+ $$
198
+ p _ {t, i} = \beta p _ {t - 1, i} + (1 - \beta) \left(\frac {\sum_ {j} I \left(D _ {i} , S _ {j}\right) \cdot \left(l _ {c} + l _ {p}\right)}{p _ {t - 1 , i}}\right) \tag {16}
199
+ $$
200
+
201
+ where $\beta$ is the EMA momentum coefficient, $l_{c}$ represents the current loss improvement, and $l_{p}$ is the estimated potential factor. A softmax normalization ensures valid probability distribution while the division by previous probabilities implements importance sampling correction. The complete algorithm is summarized in Appendix C.
202
+
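+ A minimal sketch of the update in Equation 16 follows; the softmax normalization and the division by the previous probabilities match the text, while the input shapes are our assumptions.
+
+ ```python
+ import numpy as np
+
+ def update_sampling(p_prev, impact, l_c, l_p, beta=0.9):
+     """Eq. (16): EMA update of the k-dimensional sampling vector.
+
+     p_prev: (k,) previous probabilities p_{t-1}.
+     impact: (k, m) normalized impact scores I(D_i, S_j).
+     l_c, l_p: (m,) per-task loss improvements and potential factors.
+     """
+     utility = impact @ (l_c + l_p) / p_prev       # importance-sampling correction
+     p = beta * p_prev + (1.0 - beta) * utility
+     exp_p = np.exp(p - p.max())                   # softmax keeps p on the simplex
+     return exp_p / exp_p.sum()
+ ```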
203
+ # 5 Experiments
204
+
205
+ # 5.1 Experimental Setup
206
+
207
+ # 5.1.1 Datasets and Tasks
208
+
209
+ We utilize the Tulu-3 (Lambert et al., 2024) posttraining dataset containing 939,344 samples from 18 sources across web text, academic papers, code, mathematics, and books. The downstream evaluation suite comprises: BIG-Bench Hard (BBH) (Suzgun et al., 2022) for reasoning and problem-solving, BoolQ (Clark et al., 2019) for reading comprehension and binary question answering, GSM8K (Cobbe et al., 2021) and MinervaMathQA (Lewkowycz et al., 2022) for mathematical reasoning, IFEval (Zhou et al., 2023) for instruction following, MMLU (Hendrycks et al., 2020) for multitask language understanding, PIQA (Bisk et al., 2020) for physical commonsense reasoning, PubMedQA (Jin et al., 2019) for biomedical question answering, and TruthfulQA (Lin et al., 2021) for measuring truthfulness in model responses.
210
+
211
+ # 5.1.2 Baselines
212
+
213
+ We evaluate DIDS against several domain sampling strategies: Uniform and Random sampling, Doremi (Xie et al., 2024), Velocitune (Luo et al., 2024), Doge (Fan et al., 2024b), and DGA (Fan et al., 2024a). For all baseline implementations, we partition a small subset from the downstream task's validation set to serve as observable samples for domain reweighting. Detailed implementations are provided in Appendix B.
214
+
215
+ # 5.2 Main Results
216
+
217
+ Table 1 presents comprehensive evaluation results across nine downstream tasks under both multitask and single-task optimization scenarios. For reference, we include results from the base Llama-3.1-8B model and its variant trained on the full 929k samples.
218
+
219
+ For multi-task optimization, DIDS with only 100k samples achieves an average score of 62.3, significantly outperforming all baseline methods while surpassing the performance of full data training at 61.2. Specifically, DIDS improves over the strongest baseline Doge by 2.1 on average, with particularly notable gains on mathematical reasoning tasks such as Minerva-MathQA improving by 2.7 points from 17.8 to 20.5. This demonstrates DIDS's effectiveness in identifying and prioritizing the most impactful training samples across diverse downstream tasks. Notably, we observe that for some tasks like MMLU and PIQA where the base model is already approaching saturation, additional training with irrelevant data can be detrimental, as evidenced by the Full Data approach's performance decline from 64.7 to 64.3 on MMLU. Furthermore, given the limited training data budget, unbalanced resource allocation across multiple tasks can lead to improved performance on some tasks at the expense of others, as demonstrated by DGA's poor performance of 42.1 on IFEval.
220
+
221
+ When optimizing for individual tasks, DIDS demonstrates even stronger performance with an average score of 63.7, surpassing the second-best method DGA by 2.1. DIDS shows significant gains on Knowledge-intensive tasks, with IFEval increasing from 53.2 to 57.5 and TruthfulQA improving from 38.5 to 44.8. This indicates that DIDS's FIM-guided domain impact measurement and dynamic sampling strategy are especially effective when focusing on specific downstream objectives. Notably, even with just 100k samples, roughly 10 percent of the full dataset, DIDS achieves higher average performance than training on the full 929k samples with scores of 63.7 versus 61.2.
222
+
223
+ # 5.3 Ablations
224
+
225
+ To analyze the contribution of each component in DIDS, we conduct ablation experiments by progressively removing key components through gradient-based clustering DIDS-GC, FIM-guided impact measurement DIDS-FIM, and loss trajectory consideration DIDS-LT. Results are shown in Table 2.
226
+
227
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Reasoning</td><td colspan="2">Mathematics</td><td>Instruction</td><td colspan="2">Commonsense</td><td>Domain</td><td>Truthfulness</td><td rowspan="2">Average</td></tr><tr><td>BBH</td><td>BoolQ</td><td>GSM8K</td><td>MathQA</td><td>IFEval</td><td>MMLU</td><td>PIQA</td><td>PubMedQA</td><td>TruthfulQA</td></tr><tr><td colspan="11">Multi-task Optimization</td></tr><tr><td>Llama-3.1-8B</td><td>62.5</td><td>81.8</td><td>48.9</td><td>15.7</td><td>18.5</td><td>64.7</td><td>81.1</td><td>75.8</td><td>28.5</td><td>53.1</td></tr><tr><td>+ Full Data (929k)</td><td>68.0</td><td>87.3</td><td>65.2</td><td>16.2</td><td>61.2</td><td>64.3</td><td>81.0</td><td>78.0</td><td>29.5</td><td>61.2</td></tr><tr><td>+ Random (100k)</td><td>67.4</td><td>85.6</td><td>58.9</td><td>11.4</td><td>48.2</td><td>64.0</td><td>82.0</td><td>77.4</td><td>31.5</td><td>58.9</td></tr><tr><td>+ Uniform (100k)</td><td>66.2</td><td>83.2</td><td>57.5</td><td>11.8</td><td>48.2</td><td>64.1</td><td>81.5</td><td>76.0</td><td>31.2</td><td>57.7</td></tr><tr><td>+ Doremi (100k)</td><td>67.5</td><td>85.8</td><td>58.8</td><td>17.5</td><td>49.8</td><td>64.5</td><td>81.9</td><td>77.8</td><td>35.8</td><td>59.9</td></tr><tr><td>+ Velocitune (100k)</td><td>67.2</td><td>85.5</td><td>56.2</td><td>17.2</td><td>49.0</td><td>64.4</td><td>81.7</td><td>77.5</td><td>35.0</td><td>59.3</td></tr><tr><td>+ Doge (100k)</td><td>67.8</td><td>86.0</td><td>57.5</td><td>17.8</td><td>51.2</td><td>64.6</td><td>82.0</td><td>78.5</td><td>37.2</td><td>60.2</td></tr><tr><td>+ DGA (100k)</td><td>67.0</td><td>85.4</td><td>58.8</td><td>18.2</td><td>42.1</td><td>64.8</td><td>81.8</td><td>75.2</td><td>33.4</td><td>58.5</td></tr><tr><td>+ DIDS (100k)</td><td>68.3</td><td>86.9</td><td>59.0</td><td>20.5</td><td>55.6</td><td>64.9</td><td>82.2</td><td>80.4</td><td>43.0</td><td>62.3</td></tr><tr><td colspan="11">Single-task Optimization</td></tr><tr><td>+ Doremi (100k)</td><td>68.8</td><td>86.2</td><td>60.8</td><td>18.2</td><td>51.2</td><td>64.8</td><td>82.6</td><td>78.5</td><td>37.2</td><td>60.9</td></tr><tr><td>+ Velocitune (100k)</td><td>68.0</td><td>86.0</td><td>60.5</td><td>18.0</td><td>50.8</td><td>64.5</td><td>82.0</td><td>78.2</td><td>36.8</td><td>60.5</td></tr><tr><td>+ Doge (100k)</td><td>68.2</td><td>86.8</td><td>60.9</td><td>18.4</td><td>51.5</td><td>64.9</td><td>82.2</td><td>79.0</td><td>37.5</td><td>61.0</td></tr><tr><td>+ DGA (100k)</td><td>68.6</td><td>86.5</td><td>61.8</td><td>19.2</td><td>53.2</td><td>65.2</td><td>82.4</td><td>78.8</td><td>38.5</td><td>61.6</td></tr><tr><td>+ DIDS (100k)</td><td>69.2</td><td>87.5</td><td>63.0</td><td>21.5</td><td>57.5</td><td>65.8</td><td>83.0</td><td>81.2</td><td>44.8</td><td>63.7</td></tr></table>
228
+
229
+ Table 1: The overall performance comparison. Cells with blue background indicate high scores, while red background indicates low scores. The top section shows results when optimizing for multiple downstream tasks simultaneously, while the bottom section shows results when optimizing for individual tasks.
230
+
231
+ <table><tr><td>Variant</td><td>BBH</td><td>MathQA</td><td>IFEval</td><td>TruthfulQA</td><td>Avg</td></tr><tr><td>DIDS (100k)</td><td>68.3</td><td>20.5</td><td>55.6</td><td>43.0</td><td>46.9</td></tr><tr><td>DIDS-GC</td><td>67.7</td><td>19.7</td><td>53.0</td><td>40.1</td><td>45.1</td></tr><tr><td>DIDS-FIM</td><td>67.2</td><td>18.6</td><td>51.9</td><td>38.5</td><td>44.0</td></tr><tr><td>DIDS-LT</td><td>67.5</td><td>19.5</td><td>51.4</td><td>38.1</td><td>44.1</td></tr></table>
+
+ Table 2: Ablation results. We progressively remove key components: gradient-based clustering (DIDS-GC), FIM-guided impact measurement (DIDS-FIM), and loss trajectory consideration (DIDS-LT).
232
+
233
+ DIDS-GC replaces gradient-based clustering with BERT semantic clustering, leading to a 1.8-point drop in average performance from 46.9 to 45.1. DIDS-FIM removes the FIM-guided impact measurement, causing a 2.9-point decline to 44.0, most notably affecting TruthfulQA with a 4.5-point drop and IFEval with a 3.7-point decrease. DIDS-LT eliminates the loss trajectory and saturation consideration, resulting in 2.8-point decrease to 44.1, demonstrating that dynamic adaptation to learning progress is crucial for optimal performance. These results show that each component contributes significantly to DIDS effectiveness.
234
+
235
+ # 5.4 Efficiency Analysis
236
+
237
+ To comprehensively evaluate DIDS's computational overhead, we analyze the efficiency of each component: gradient-based clustering, FIM-guided impact measurement, and loss trajectory estimation. Our implementation optimizes computational costs by retaining gradients only from the final $10\%$ of layers, requiring complete forward passes but partial backward passes. Table 3 presents the computational requirements in terms of TFLOPs and GPU Hours on H800.
238
+
239
240
+
241
+ <table><tr><td rowspan="2">Component</td><td colspan="2">TFLOPs</td><td colspan="2">GPU Hours</td></tr><tr><td>DGA</td><td>DIDS</td><td>DGA</td><td>DIDS</td></tr><tr><td>Base Training</td><td>5.47 × 104</td><td>5.47 × 104</td><td>101.6</td><td>101.6</td></tr><tr><td>Cluster (BERT vs. Gradient)</td><td>7.77 × 102</td><td>1.87 × 103</td><td>1.5</td><td>3.3</td></tr><tr><td>Impact (Gradient vs. FIM)</td><td>9.86 × 101</td><td>1.78 × 102</td><td>0.1</td><td>0.2</td></tr><tr><td>Loss Trajectory Consideration</td><td>-</td><td>&lt; 10-1</td><td>-</td><td>&lt; 0.1</td></tr><tr><td>Total</td><td>5.56 × 104</td><td>5.67 × 104</td><td>103.2</td><td>105.2</td></tr></table>
242
+
243
+ Table 3: Computational cost analysis of different components in DIDS. Base training refers to standard training of an 8B parameter model on 1B tokens.
244
+
245
Base training of an 8B parameter model on 1B tokens requires $5.47 \times 10^{4}$ TFLOPs for forward and backward passes, consuming approximately 101.6 GPU hours. For the clustering component processing 1B tokens, we evaluate two approaches using 500M models. BERT semantic clustering requires only forward passes at $7.77 \times 10^{2}$ TFLOPs, while gradient-based clustering with dimensionality reduction necessitates both forward and partial backward computation at $1.87 \times 10^{3}$ TFLOPs, requiring 1.5 and 3.3 GPU hours respectively.

For domain impact measurement using an 8B parameter base model with 25 mixing ratio updates, we compare FIM-guided metrics against gradient alignment. Across 72 training domains, maintaining running averages of domain-specific gradients incurs negligible overhead. Evaluating 9 downstream tasks with 200 samples per task, gradient alignment requires $9.86 \times 10^{1}$ TFLOPs. DIDS additionally computes the FIM diagonal elements, adding negligible overhead for a total of approximately $1.78 \times 10^{2}$ TFLOPs, or 0.2 GPU hours. The loss trajectory estimation component introduces a minimal computational burden, below $10^{-1}$ TFLOPs, as it only involves fitting curves to scalar loss values. While DIDS introduces roughly $1.9\%$ additional computational cost compared to DGA, this overhead is justified by substantial performance improvements and reduced training data requirements.

![](images/7a49c548867c28d668bd1169c2d26b4f44bfe755ddbc8541004da8de5742bf2b.jpg)
(a) Impact of Update Frequency (b) Impact of Irrelevant Data

![](images/3fdc9a0e74a58f2eb99a1249d7a9ff1514fc5288123824d426ffdaff5c99fb37.jpg)
Figure 2: Effects of update frequency and irrelevant data.
# 5.5 Parameter Analysis

# 5.5.1 Impact of Update Frequency

Figure 2a shows how the number of domain sampling probability updates during training affects model performance. When using only 5 updates throughout the entire training process, DIDS achieves an average score of 58.2, which is comparable to the random sampling baseline at 58.9. As we increase the number of updates to 25 and 45, DIDS shows substantial improvements, reaching scores of 60.1 and 61.8 respectively. The performance continues to improve with 65 updates, achieving 62.3, and peaks at 62.4 with 85 updates. However, further increasing to 95 updates leads to a slight performance decline back to 62.3. DGA exhibits a similar trend but with lower overall performance, reaching its peak of 60.1 at 65 updates. Random sampling maintains a constant performance of 58.9 regardless of update frequency, serving as a stable baseline. These results suggest that performing a limited number of updates during training provides optimal performance for domain sampling strategies.
![](images/538599f14842ce5498f53f6681f3c1c4b98c051f42bd5c5f64f2e40b1d9508ba.jpg)
(a) Impact of Proxy Model Size (b) Impact of Domain Count

![](images/7606d414c1e06040266ceb098461e22cc008dc4e66b0319bba2e5ba3d05f8dbb.jpg)
Figure 3: Effects of model size and domain count.
# 5.5.2 Impact of Irrelevant Data Ratio

To evaluate DIDS's robustness to noise in training data, we introduce varying proportions of irrelevant financial domain data and measure model performance. As shown in Figure 2b, DIDS demonstrates strong resilience to irrelevant data. Starting at a baseline performance of 62.3 with no irrelevant data, DIDS maintains and even improves its performance as noise increases, reaching a peak of 63.5 at $20\%$ irrelevant data before declining slightly to 63.1 at $25\%$. In contrast, both comparison methods exhibit clear degradation with increased noise. DGA's performance drops from 58.5 to 57.1, showing moderate sensitivity to irrelevant data. Random sampling demonstrates the most severe degradation, falling from 58.9 to 54.2. These results highlight DIDS's robust ability to identify and leverage valuable training samples through its FIM-guided impact measurement, even in challenging scenarios with substantial noise in the training dataset.
# 5.5.3 Impact of Proxy Model Size

We evaluate DIDS using different sizes of proxy models for gradient-based clustering: 500M, 1B, and the full 8B target model. As shown in Figure 3a, the choice of proxy model size has minimal impact on final performance, with average scores of 62.3, 62.4, and 62.5 respectively. This validates our design choice of using a 500M proxy model for clustering, as it provides comparable quality while significantly reducing computational costs.
# 5.5.4 Impact of Domain Partition Count

We further examine how the granularity of domain partitioning affects model performance. Figure 3b shows that increasing the number of domains beyond the initial 18 data-source-based domains leads to substantial early improvements in performance. The average score rises sharply from 61.4 to 62.0 when increasing to 36 domains, followed by moderate gains up to 62.3 with 72 domains. However, further partitioning yields diminishing returns, with performance plateauing around 62.7 even when scaling to 1152 domains. Based on this analysis, we select 72 domains as our default configuration to balance effectiveness and computational efficiency.
# 6 Conclusion

In this paper, we proposed DIDS, a domain impact-aware data sampling framework for large language model training. To ensure consistent domain partitioning, DIDS groups training samples based on gradient patterns, which leads to more effective sampling decisions. FIM-guided metrics measure domain impact accurately, while dynamic sampling optimization combines impact assessment with learning trajectories. Experiments demonstrated that DIDS achieves superior performance across multiple tasks using only $10\%$ of the training data.
# Limitations

Our work has several limitations that should be acknowledged:

First, while DIDS demonstrates strong performance with limited training data, the gradient-based domain repartitioning introduces additional computational overhead when processing large-scale datasets. Although we mitigate this through proxy models and dimensionality reduction, the clustering process still requires considerable computational resources when scaling to billions of training samples. Future work could explore more efficient methods for gradient-based domain partitioning to further reduce this overhead while maintaining clustering quality.

Second, the effectiveness of our FIM-guided impact measurement depends on the accuracy of the diagonal approximation of the Fisher Information Matrix. While this approximation is computationally necessary, it may not capture all parameter interactions, potentially leading to suboptimal sampling decisions in cases where off-diagonal elements are significant. Additionally, our current approach to loss trajectory modeling assumes exponential decay patterns, which may not hold for all learning scenarios.
# Ethics Statement

While DIDS improves training efficiency through selective sampling, it may inadvertently amplify existing biases in the training data by preferentially selecting certain domains based on their measured impact. This could lead to underrepresentation of minority groups or less common topics in the final model. In future applications, DIDS should be integrated with ethical auditing tools to ensure fairness in the sampling process and maintain model ethics.
# Acknowledgments

We would like to specially thank the support from the A3 project of the HKUST & MetaX Joint Laboratory. The research work described in this paper was supported by the Hong Kong Research Grants Council (grants 16202722, 16210625, T43-513/23-N, T22-607/24N). It was partially conducted in the JC STEM Lab of Data Science Foundations funded by The Hong Kong Jockey Club Charities Trust. We acknowledge the support of the Natural Science Foundation of Zhejiang Province under Grant LY23F020010. This work is supported by the National Natural Science Foundation of China (Grant No. 62272334, 6257073827).
# References

Amos Azaria, Rina Azoulay, and Shulamit Reches. 2024. Chatgpt is a remarkable tool—for experts. Data Intelligence, 6(1):240-296.

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language models across training and scaling. In ICML, pages 2397-2430. PMLR.

Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. Piqa: Reasoning about physical commonsense in natural language. In AAAI, volume 34, pages 7432-7439.

Huajun Chen. 2024. Large knowledge model: Perspectives and challenges. Data Intelligence, 6(3):587-620.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv.

Hexuan Deng, Wenxiang Jiao, Xuebo Liu, Jing Li, Min Zhang, and Zhaopeng Tu. 2024. Drpruning: Efficient large language model pruning through distributionally robust optimization. arXiv preprint arXiv:2411.14055.

Simin Fan, David Grangier, and Pierre Ablin. 2024a. Dynamic gradient alignment for online data mixing. arXiv.

Simin Fan, Matteo Pagliardini, and Martin Jaggi. 2024b. Doge: Domain reweighting with generalization estimation. In ICML.

Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2024. A framework for few-shot language model evaluation.

Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Hongxia Ma, Li Zhang, Boxing Chen, Hao Yang, et al. 2024. Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation. arXiv.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. arXiv.

Feiyang Kang, Yifan Sun, Bingbing Wen, Si Chen, Dawn Song, Rafid Mahmood, and Ruoxi Jia. 2024. Autoscale: Automatic prediction of compute-optimal data composition for training llms. arXiv.

Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. 2024. Tülu 3: Pushing frontiers in open language model post-training. arXiv.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843-3857.

Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. Truthfulqa: Measuring how models mimic human falsehoods. arXiv.

Jiaheng Liu, Chenchen Zhang, Jinyang Guo, Yuanxing Zhang, Haoran Que, Ken Deng, Jie Liu, Ge Zhang, Yanan Wu, Congnan Liu, et al. 2024a. Ddk: Distilling domain knowledge for efficient large language models. Advances in Neural Information Processing Systems, 37:98297-98319.

Qian Liu, Xiaosen Zheng, Niklas Muennighoff, Guangtao Zeng, Longxu Dou, Tianyu Pang, Jing Jiang, and Min Lin. 2024b. Regmix: Data mixture as regression for language model pre-training. arXiv.

Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. 2023. What makes good data for alignment? a comprehensive study of automatic data selection in instruction tuning. arXiv.

Zheheng Luo, Xin Zhang, Xiao Liu, Haoling Li, Yeyun Gong, Chen Qi, and Peng Cheng. 2024. Velocitune: A velocity-based dynamic domain reweighting method for continual pre-training. arXiv.

Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Anton Belyi, et al. 2025. Mm1: methods, analysis and insights from multimodal llm pre-training. In ECCV, pages 304-323. Springer.

Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. 2023. Trak: Attributing model behavior at scale. arXiv.

Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. Advances in Neural Information Processing Systems, 33:19920-19930.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv.

Maurice Weber, Daniel Fu, Quentin Anthony, Yonatan Oren, Shane Adams, Anton Alexandrov, Xiaozhong Lyu, Huu Nguyen, Xiaozhe Yao, Virginia Adams, et al. 2024. Redpajama: an open dataset for training large language models. arXiv.

Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. 2024. Less: Selecting influential data for targeted instruction tuning. arXiv.

Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy S Liang, Quoc V Le, Tengyu Ma, and Adams Wei Yu. 2024. Doremi: Optimizing data mixtures speeds up language model pretraining. Advances in Neural Information Processing Systems, 36.

Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy S Liang. 2023. Data selection for language models via importance resampling. Advances in Neural Information Processing Systems, 36:34201-34227.

Jiasheng Ye, Peiju Liu, Tianxiang Sun, Yunhua Zhou, Jun Zhan, and Xipeng Qiu. 2024. Data mixing laws: Optimizing data mixtures by predicting language modeling performance. arXiv.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv.

Xilin Zhang, Zhixin Mao, Ziwen Chen, and Shen Gao. 2024. Effective tool augmented multi-agent framework for data analysis. Data Intelligence, 6(4):923-945.

Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of 100+ language models. In ACL, Bangkok, Thailand. Association for Computational Linguistics.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2024. Lima: Less is more for alignment. Advances in Neural Information Processing Systems, 36.

Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. 2023. Instruction-following evaluation for large language models. arXiv.
# A Johnson-Lindenstrauss Theorem and Random Projection Initialization

# A.1 Johnson-Lindenstrauss Lemma

The Johnson-Lindenstrauss lemma states that for any set $X$ of $m$ points in $\mathbb{R}^N$ and $0 < \varepsilon < 1$, there exists a linear map $f:\mathbb{R}^N\to \mathbb{R}^n$ where $n > 8\ln(m) / \varepsilon^2$ such that:

$$
(1 - \varepsilon)\|u - v\|^{2} \leq \|f(u) - f(v)\|^{2} \leq (1 + \varepsilon)\|u - v\|^{2}
$$

where $u, v \in X$. This theorem guarantees that we can project high-dimensional vectors into a lower-dimensional space while approximately preserving their pairwise distances.
# A.2 Gaussian Random Projection

For practical implementation, we utilize Gaussian random projection matrices which satisfy the following properties:

1. Spherical symmetry: For any orthogonal matrices $A, B \in O(d)$, $R A R^{T}$ and $R B R^{T}$ have identical distributions.
2. Orthogonality: The rows of $R$ are approximately orthogonal.
3. Unit length: Each row of $R$ is normalized to unit length.

The projection matrix $R\in \mathbb{R}^{h\times s}$ is constructed as follows:

1. Generate entries $R_{ij}$ independently according to:

$$
R_{ij} = \begin{cases} +1/\sqrt{t} & \text{with probability } 1/2 \\ -1/\sqrt{t} & \text{with probability } 1/2 \end{cases}
$$

where $t = \Omega(k / \varepsilon^2)$ for dimension reduction parameter $k$ and error tolerance $\varepsilon$.

2. Normalize each column to ensure unit length: $\tilde{R}_j = R_j / \|R_j\|_2$
# A.3 Application to Gradient Dimensionality Reduction

In the context of gradient-based domain repartitioning, given gradient vectors $g_{i} \in \mathbb{R}^{h}$, we project them to $\tilde{g}_i \in \mathbb{R}^s$ where $s \ll h$ using:

$$
\tilde{g}_{i} = R^{T} g_{i}
$$

The choice of target dimension $s$ balances computational efficiency with distance preservation, typically set as:

$$
s = O(\log(m) / \varepsilon^{2})
$$

where $m$ is the number of gradient vectors and $\varepsilon$ is the desired distance preservation tolerance (typically 0.1-0.3).

This projection enables efficient clustering of gradient vectors while maintaining their essential geometric relationships, facilitating meaningful domain repartitioning based on training behavior patterns.
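A minimal NumPy sketch of this construction and its application to gradient vectors is given below. The shapes and the choice $t = s$ are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sign_projection_matrix(h, s, seed=0):
    """Random sign matrix with entries +/- 1/sqrt(s), columns normalized.

    A sketch of the Appendix A.2 construction, taking t equal to the
    target dimension s for simplicity.
    """
    rng = np.random.default_rng(seed)
    R = rng.choice([-1.0, 1.0], size=(h, s)) / np.sqrt(s)
    # Column normalization (step 2 of the construction).
    return R / np.linalg.norm(R, axis=0, keepdims=True)

# Project m gradient vectors of dimension h down to s = O(log(m) / eps^2).
m, h, eps = 10_000, 4096, 0.2
s = int(8 * np.log(m) / eps**2)
R = sign_projection_matrix(h, s)
grads = np.random.randn(m, h)   # stand-in for per-sample gradient vectors
projected = grads @ R           # batched form of g_tilde_i = R^T g_i
print(projected.shape)          # (m, s)
```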
# B Implementation Details

# B.1 Training Data Distribution

The training dataset consists of 939,344 samples from 18 diverse sources, covering domains including mathematics, coding, instruction following, and general dialogue. The dataset is available at https://huggingface.co/datasets/allenai/tulu-3-sft-mixture. The largest components are Tulu 3 Persona MATH with 149,960 samples focusing on mathematical reasoning, followed by Evol CodeAlpaca with 107,276 coding-related samples and FLAN v2 with 89,982 samples of general task instructions. We provide a detailed breakdown of the dataset composition in Table 4.

<table><tr><td>Dataset</td><td>Samples</td><td>Percentage (%)</td></tr><tr><td>Tulu 3 Persona MATH</td><td>149,960</td><td>16.0</td></tr><tr><td>Evol CodeAlpaca</td><td>107,276</td><td>11.4</td></tr><tr><td>FLAN v2</td><td>89,982</td><td>9.6</td></tr><tr><td>NuminaMath-TIR</td><td>64,312</td><td>6.8</td></tr><tr><td>Tulu 3 Persona GSM</td><td>49,980</td><td>5.3</td></tr><tr><td>Tulu 3 WildGuardMix</td><td>50,000</td><td>5.3</td></tr><tr><td>Tulu 3 WildJailbreak</td><td>50,000</td><td>5.3</td></tr><tr><td>Tulu 3 Persona Python</td><td>34,999</td><td>3.7</td></tr><tr><td>Tulu 3 Persona IF</td><td>29,980</td><td>3.2</td></tr><tr><td>Tulu 3 Persona Algebra</td><td>20,000</td><td>2.1</td></tr><tr><td>CoCoNot</td><td>10,983</td><td>1.2</td></tr><tr><td>No Robots</td><td>9,500</td><td>1.0</td></tr><tr><td>OpenAssistant Guanaco</td><td>7,132</td><td>0.8</td></tr><tr><td>TableGPT</td><td>5,000</td><td>0.5</td></tr><tr><td>Tulu 3 Hardcoded</td><td>240</td><td>0.03</td></tr><tr><td>Aya</td><td>100,000</td><td>10.6</td></tr><tr><td>WildChat GPT-4</td><td>100,000</td><td>10.6</td></tr><tr><td>SciRIFF</td><td>10,000</td><td>1.1</td></tr><tr><td>Total</td><td>939,344</td><td>100.0</td></tr></table>

Table 4: Distribution of training data across different sources.
# B.2 Model Architecture

We implement DIDS based on multiple foundation models: Llama-3.1 (8B and 70B variants), Llama2-7B, and Pythia-6.9B. For the proxy model, we utilize Qwen-2.5 (500M) and Llama-3.2 (1B).
# B.3 Baseline Description

We compare DIDS against the following baseline methods:

- Uniform sampling: A basic baseline that assigns equal probabilities to all domains throughout training.
- Random sampling: Randomly selects domain data at each step without optimization.
- Doremi (Xie et al., 2024): Trains a proxy model using group distributionally robust optimization to produce offline domain weights for resampling.
- Velocitune (Luo et al., 2024): Dynamically adjusts domain proportions based on learning velocity guided by scaling laws.
- Doge (Fan et al., 2024b): Uses bi-level optimization with a proxy model to learn offline domain weights through gradient alignment.
- DGA (Fan et al., 2024a): Employs online gradient alignment to dynamically estimate optimal pre-training data mixtures.

For all baselines, we use identical validation set splits from downstream tasks and tune hyperparameters on a separate development set to ensure fair comparison.

To ensure fair comparison across all methods, we adapted the baseline approaches to work with observable downstream tasks. Specifically, Doremi was originally designed for in-domain scenarios where test sets follow the same distribution as the training data; we transferred this algorithm to our downstream-task setting by computing excess loss over downstream domains. Velocitune was similarly adapted to observe loss over downstream domains when adjusting training data proportions. Doge and DGA naturally support downstream domain settings, as they compute data proportions based on gradient similarity between training and validation (observable) sets. Importantly, all baseline methods use the same gradient-based domain partitioning strategy as DIDS, ensuring that computational overhead and domain granularity are consistent across comparisons.
# B.4 Training Details

The training process employs the AdamW optimizer with a learning rate of $5 \times 10^{-4}$ and linear decay scheduling based on Llama-Factory (Zheng et al., 2024)$^{2}$. We apply gradient clipping at 1.0 and weight decay at 0.1. The model processes sequences with a maximum length of 8,192 tokens and uses a batch size of 128, distributed across 8 H800 GPUs. For DIDS-specific configurations, we set the domain update interval $\tau = 4{,}000$ steps and use an EMA coefficient $\beta = 0.1$. The framework utilizes 72 domains for gradient-based clustering. Our dimensionality reduction approach first retains the top $10\%$ of gradient elements by magnitude before projecting the gradients to 1,024 dimensions.
# B.5 Evaluation Details

We conduct evaluations using the lm-eval-harness platform (Gao et al., 2024)$^3$. All tasks are evaluated under a 3-shot setting using the vLLM backend with chat templates applied. Other parameters follow the platform's default configurations.
# C Algorithm Description

The Domain Impact-aware Data Sampling (DIDS) algorithm is shown in Algorithm 1, which begins with initialization and domain repartitioning. Starting with uniform sampling probabilities across $k$ domains, the algorithm employs a proxy model $f'$ to compute gradients for each training sample $x_{i}$. These gradients undergo TopK selection and Johnson-Lindenstrauss random projection for dimensionality reduction before k-means clustering establishes the $k$ domains $\{D_1, \dots, D_k\}$.
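A compact sketch of this repartitioning step follows. It assumes per-sample proxy-model gradients are already stacked into an array and uses scikit-learn's `KMeans` as a stand-in for the clustering; all names and defaults are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def repartition_domains(grads, k=72, topk_frac=0.10, proj_dim=1024, seed=0):
    """Sketch of the domain repartitioning in Algorithm 1.

    grads: (n_samples, h) array of per-sample proxy-model gradients.
    The top 10% of entries by magnitude are kept, the rest zeroed, then a
    random sign projection reduces each vector to proj_dim dimensions
    before k-means assigns samples to domains.
    """
    n, h = grads.shape
    keep = max(1, int(h * topk_frac))
    # TopK selection: zero out everything except the largest-magnitude entries.
    drop_idx = np.argpartition(np.abs(grads), h - keep, axis=1)[:, : h - keep]
    sparse = grads.copy()
    np.put_along_axis(sparse, drop_idx, 0.0, axis=1)
    # Johnson-Lindenstrauss random projection (Appendix A).
    rng = np.random.default_rng(seed)
    R = rng.choice([-1.0, 1.0], size=(h, proj_dim)) / np.sqrt(proj_dim)
    reduced = sparse @ R
    labels = KMeans(n_clusters=k, n_init="auto", random_state=seed).fit_predict(reduced)
    return labels  # labels[i] is the domain index assigned to sample i
```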
At intervals of $\tau$ training steps, DIDS performs domain impact assessment and probability updates. For each domain-task pair $(D_i, S_j)$, the algorithm calculates gradient differences $\Delta$ and impact scores $I(D_i, S_j)$ using the FIM-guided metric, while simultaneously fitting exponential decay curves to task loss histories to estimate future potential $L_p(S_j)$ and current improvements $\Delta L(S_j)$. The algorithm then updates sampling probabilities by computing utility scores $U(D_i)$ that combine these impact scores and loss improvements, applying softmax normalization and an exponential moving average with coefficient $\beta$.
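The trajectory-fitting step might look like the following SciPy sketch, assuming a list of recorded per-task losses; the initial guess `p0` is an illustrative choice.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_loss_trajectory(loss_history, tau):
    """Fit L(t) = a * exp(-b t) + c to a task's loss history.

    Returns (current improvement Delta L, estimated future potential L_p
    over the next tau steps), as used in Algorithm 1.
    """
    t = np.arange(len(loss_history), dtype=float)
    L = np.asarray(loss_history, dtype=float)

    def decay(t, a, b, c):
        return a * np.exp(-b * t) + c

    p0 = (L[0] - L[-1], 0.01, L[-1])  # rough initial guess
    (a, b, c), _ = curve_fit(decay, t, L, p0=p0, maxfev=10_000)
    future_potential = L[-1] - decay(t[-1] + tau, a, b, c)  # L_p(S_j)
    current_improvement = L[-2] - L[-1]                     # Delta L(S_j)
    return current_improvement, future_potential
```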
Between updates, mini-batches are sampled according to current probabilities $\mathbf{p}_t$, with model parameters updated through standard optimization. This design balances theoretical foundations with practical efficiency through its use of proxy models, strategic gradient processing, and periodic updates, enabling effective domain sampling while maintaining computational feasibility.
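Putting the probability refresh together, a NumPy sketch of one update (lines 29-34 of Algorithm 1) could read as follows; the array shapes are assumptions for illustration.

```python
import numpy as np

def update_sampling_probs(p_prev, impact, dL, Lp, beta=0.1):
    """One sampling-probability update.

    p_prev: (k,) previous sampling distribution over domains.
    impact: (k, m) matrix of I(D_i, S_j).
    dL, Lp: (m,) per-task current improvement and future potential.
    """
    # Utility: impact weighted by task-level progress, normalized by the
    # previous probability so rarely sampled domains are not starved.
    U = (impact * (dL + Lp)[None, :]).sum(axis=1) / p_prev
    # Softmax normalization (shifted for numerical stability).
    expU = np.exp(U - U.max())
    p_hat = expU / expU.sum()
    # Exponential moving average, then renormalize.
    p = beta * p_prev + (1.0 - beta) * p_hat
    return p / p.sum()
```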
# D Extended Experimental Results

# D.1 Experiments on Additional Models and Datasets

To validate the effectiveness of DIDS across different model architectures and datasets, we conducted additional experiments using Mixtral-7B$^{4}$ alongside Llama-3.1-8B, and tested on both the Tulu-v3 and OpenHermes-2.5$^{5}$ datasets. These comprehensive evaluations strengthen our claims regarding DIDS's broad applicability.

# D.1.1 Results on Mixtral-7B with Tulu-v3

Table 5 presents the performance of Mixtral-7B when trained on the Tulu-v3 dataset using various sampling strategies. Similar to our findings with Llama-3.1-8B, DIDS demonstrates superior performance across most tasks, achieving an average score of 61.2 in multi-task optimization, which outperforms full data training (60.4) despite using only $10\%$ of the training examples. Notably, DIDS shows substantial improvements on mathematical reasoning tasks (MathQA: 17.8 vs. 15.8 for DGA) and truthfulness (TruthfulQA: 50.5 vs. 47.2 for Doge).
# D.1.2 Results on Llama-3.1-8B with OpenHermes-2.5

We further evaluated DIDS using the OpenHermes-2.5 dataset, which offers a different distribution of training data compared to Tulu-v3. Table 6 shows that DIDS consistently outperforms baseline methods across various downstream tasks, achieving an average score of 62.7 in multi-task optimization, which is comparable to training on the full dataset (62.4). In single-task optimization, DIDS achieves even better performance with a score of 64.1, demonstrating the effectiveness of our domain-aware sampling approach on different data distributions.

# D.1.3 Results on Mixtral-7B with OpenHermes-2.5

To further demonstrate the robustness of our approach across different model-dataset combinations, we evaluated Mixtral-7B on the OpenHermes-2.5 dataset. As shown in Table 7, DIDS continues to outperform baseline methods, achieving an average score of 60.1 in multi-task optimization and 61.3 in single-task optimization. These consistent improvements across different models and datasets strongly support the generalizability of our approach.
# D.2 Complete Ablation Study on All Downstream Tasks

Table 8 presents a comprehensive ablation study of DIDS across all nine downstream tasks. This expanded analysis provides a more detailed understanding of how each component contributes to the overall performance gains.

The ablation results clearly demonstrate the contribution of each component of DIDS. Gradient-based clustering significantly improves performance, as replacing it with BERT semantic clustering leads to a 1.2-point drop in average performance from 62.3 to 61.1. This highlights the importance of grouping data based on actual training effects rather than semantic similarity alone. The FIM-guided impact measurement proves crucial, with its removal resulting in a 2.1-point decline to 60.2.
<table><tr><td rowspan="2">Method</td><td colspan="2">Reasoning</td><td colspan="2">Mathematics</td><td>Instruction</td><td colspan="3">Commonsense</td><td>Truthfulness</td><td rowspan="2">Average</td></tr><tr><td>BBH</td><td>BoolQ</td><td>GSM8K</td><td>MathQA</td><td>IFEval</td><td>MMLU</td><td>PIQA</td><td>PubMedQA</td><td>TruthfulQA</td></tr><tr><td colspan="11">Multi-task Optimization</td></tr><tr><td>Mixtral-7B</td><td>56.0</td><td>84.7</td><td>36.9</td><td>13.2</td><td>36.6</td><td>61.9</td><td>81.6</td><td>77.8</td><td>41.3</td><td>54.4</td></tr><tr><td>+ Full Data (929k)</td><td>61.0</td><td>87.8</td><td>52.5</td><td>14.0</td><td>63.2</td><td>62.0</td><td>81.8</td><td>79.5</td><td>42.0</td><td>60.4</td></tr><tr><td>+ Random (100k)</td><td>60.2</td><td>86.7</td><td>47.8</td><td>9.5</td><td>55.3</td><td>61.8</td><td>82.2</td><td>78.5</td><td>43.0</td><td>58.3</td></tr><tr><td>+ Uniform (100k)</td><td>59.1</td><td>85.3</td><td>46.5</td><td>9.8</td><td>55.0</td><td>61.5</td><td>81.8</td><td>77.5</td><td>42.8</td><td>57.7</td></tr><tr><td>+ Doremi (100k)</td><td>60.5</td><td>86.5</td><td>48.0</td><td>15.0</td><td>56.5</td><td>62.0</td><td>82.0</td><td>79.0</td><td>46.5</td><td>59.5</td></tr><tr><td>+ Velocitune (100k)</td><td>60.2</td><td>86.2</td><td>46.0</td><td>14.8</td><td>56.0</td><td>61.8</td><td>81.9</td><td>78.8</td><td>45.8</td><td>59.1</td></tr><tr><td>+ Doge (100k)</td><td>60.8</td><td>86.8</td><td>47.0</td><td>15.3</td><td>57.8</td><td>62.1</td><td>82.3</td><td>79.6</td><td>47.2</td><td>59.9</td></tr><tr><td>+ DGA (100k)</td><td>60.0</td><td>86.3</td><td>48.0</td><td>15.8</td><td>53.5</td><td>62.2</td><td>82.0</td><td>77.0</td><td>44.5</td><td>58.8</td></tr><tr><td>+ DIDS (100k)</td><td>61.5</td><td>87.0</td><td>48.5</td><td>17.8</td><td>60.0</td><td>62.4</td><td>82.5</td><td>81.0</td><td>50.5</td><td>61.2</td></tr><tr><td colspan="11">Single-task Optimization</td></tr><tr><td>+ Doremi (100k)</td><td>61.8</td><td>86.8</td><td>50.0</td><td>15.8</td><td>57.5</td><td>62.3</td><td>82.8</td><td>79.8</td><td>47.0</td><td>60.4</td></tr><tr><td>+ Velocitune (100k)</td><td>61.0</td><td>86.5</td><td>49.5</td><td>15.5</td><td>57.0</td><td>62.0</td><td>82.3</td><td>79.5</td><td>46.5</td><td>60.0</td></tr><tr><td>+ Doge (100k)</td><td>61.3</td><td>87.0</td><td>50.2</td><td>16.0</td><td>57.8</td><td>62.4</td><td>82.5</td><td>80.0</td><td>47.5</td><td>60.5</td></tr><tr><td>+ DGA (100k)</td><td>61.7</td><td>86.8</td><td>51.0</td><td>16.7</td><td>58.8</td><td>62.6</td><td>82.6</td><td>79.8</td><td>48.0</td><td>60.9</td></tr><tr><td>+ DIDS (100k)</td><td>62.5</td><td>87.5</td><td>52.0</td><td>18.5</td><td>62.0</td><td>63.0</td><td>83.0</td><td>82.0</td><td>52.0</td><td>62.5</td></tr></table>

Table 5: Performance comparison of the Mixtral-7B model trained on the Tulu-v3 dataset under different sampling strategies.

<table><tr><td rowspan="2">Method</td><td colspan="2">Reasoning</td><td colspan="2">Mathematics</td><td>Instruction</td><td colspan="3">Commonsense</td><td>Truthfulness</td><td rowspan="2">Average</td></tr><tr><td>BBH</td><td>BoolQ</td><td>GSM8K</td><td>MathQA</td><td>IFEval</td><td>MMLU</td><td>PIQA</td><td>PubMedQA</td><td>TruthfulQA</td></tr><tr><td colspan="11">Multi-task Optimization</td></tr><tr><td>Llama-3.1-8B</td><td>62.5</td><td>81.8</td><td>48.9</td><td>15.7</td><td>18.5</td><td>64.7</td><td>81.1</td><td>75.8</td><td>28.5</td><td>53.1</td></tr><tr><td>+ Full OH-2.5 (1000k)</td><td>67.5</td><td>86.8</td><td>67.0</td><td>17.5</td><td>60.0</td><td>64.5</td><td>81.5</td><td>77.5</td><td>39.5</td><td>62.4</td></tr><tr><td>+ Random (100k)</td><td>66.8</td><td>85.0</td><td>60.2</td><td>12.5</td><td>49.0</td><td>64.2</td><td>82.0</td><td>76.8</td><td>34.0</td><td>59.0</td></tr><tr><td>+ Uniform (100k)</td><td>65.8</td><td>83.5</td><td>59.0</td><td>12.8</td><td>48.8</td><td>64.0</td><td>81.6</td><td>76.2</td><td>33.5</td><td>58.4</td></tr><tr><td>+ Doremi (100k)</td><td>67.2</td><td>85.5</td><td>61.5</td><td>17.0</td><td>50.5</td><td>64.5</td><td>82.0</td><td>77.5</td><td>38.0</td><td>60.4</td></tr><tr><td>+ Velocitune (100k)</td><td>67.0</td><td>85.2</td><td>60.0</td><td>16.8</td><td>50.0</td><td>64.3</td><td>81.8</td><td>77.2</td><td>37.5</td><td>60.0</td></tr><tr><td>+ Doge (100k)</td><td>67.5</td><td>85.8</td><td>61.0</td><td>17.5</td><td>52.0</td><td>64.6</td><td>82.1</td><td>78.0</td><td>39.0</td><td>60.8</td></tr><tr><td>+ DGA (100k)</td><td>66.8</td><td>85.2</td><td>61.8</td><td>17.8</td><td>46.5</td><td>64.7</td><td>81.9</td><td>76.5</td><td>36.5</td><td>59.7</td></tr><tr><td>+ DIDS (100k)</td><td>68.0</td><td>86.5</td><td>62.5</td><td>19.5</td><td>56.0</td><td>64.8</td><td>82.3</td><td>79.5</td><td>45.0</td><td>62.7</td></tr><tr><td colspan="11">Single-task Optimization</td></tr><tr><td>+ Doremi (100k)</td><td>68.5</td><td>86.0</td><td>63.0</td><td>18.0</td><td>52.5</td><td>64.8</td><td>82.5</td><td>78.2</td><td>39.0</td><td>61.4</td></tr><tr><td>+ Velocitune (100k)</td><td>67.8</td><td>85.8</td><td>62.5</td><td>17.8</td><td>52.0</td><td>64.6</td><td>82.2</td><td>78.0</td><td>38.6</td><td>61.0</td></tr><tr><td>+ Doge (100k)</td><td>68.0</td><td>86.5</td><td>63.0</td><td>18.2</td><td>53.0</td><td>64.8</td><td>82.3</td><td>78.8</td><td>39.5</td><td>61.6</td></tr><tr><td>+ DGA (100k)</td><td>68.4</td><td>86.2</td><td>64.0</td><td>19.0</td><td>54.5</td><td>65.0</td><td>82.5</td><td>78.5</td><td>40.5</td><td>62.1</td></tr><tr><td>+ DIDS (100k)</td><td>69.0</td><td>87.2</td><td>65.5</td><td>21.0</td><td>58.5</td><td>65.5</td><td>82.8</td><td>80.5</td><td>46.5</td><td>64.1</td></tr></table>

Table 6: Performance comparison of the Llama-3.1-8B model trained on the OpenHermes-2.5 dataset under different sampling strategies.

<table><tr><td rowspan="2">Method</td><td colspan="2">Reasoning</td><td colspan="2">Mathematics</td><td>Instruction</td><td colspan="3">Commonsense</td><td>Truthfulness</td><td rowspan="2">Average</td></tr><tr><td>BBH</td><td>BoolQ</td><td>GSM8K</td><td>MathQA</td><td>IFEval</td><td>MMLU</td><td>PIQA</td><td>PubMedQA</td><td>TruthfulQA</td></tr><tr><td colspan="11">Multi-task Optimization</td></tr><tr><td>Mixtral-7B</td><td>56.0</td><td>84.7</td><td>36.9</td><td>13.2</td><td>36.6</td><td>61.9</td><td>81.6</td><td>77.8</td><td>41.3</td><td>54.4</td></tr><tr><td>+ Full OH-2.5 (1000k)</td><td>59.8</td><td>87.9</td><td>64.2</td><td>14.4</td><td>45.8</td><td>62.0</td><td>82.6</td><td>76.6</td><td>50.5</td><td>60.4</td></tr><tr><td>+ Random (100k)</td><td>58.5</td><td>86.5</td><td>54.0</td><td>10.5</td><td>42.0</td><td>61.8</td><td>82.0</td><td>76.0</td><td>46.0</td><td>57.5</td></tr><tr><td>+ Uniform (100k)</td><td>57.8</td><td>86.0</td><td>52.5</td><td>10.8</td><td>41.8</td><td>61.7</td><td>81.8</td><td>75.5</td><td>45.5</td><td>57.0</td></tr><tr><td>+ Doremi (100k)</td><td>59.0</td><td>87.0</td><td>55.5</td><td>13.8</td><td>43.5</td><td>62.0</td><td>82.2</td><td>76.2</td><td>48.0</td><td>58.6</td></tr><tr><td>+ Velocitune (100k)</td><td>58.7</td><td>86.8</td><td>54.5</td><td>13.5</td><td>43.0</td><td>61.9</td><td>82.0</td><td>76.0</td><td>47.5</td><td>58.2</td></tr><tr><td>+ Doge (100k)</td><td>59.2</td><td>87.2</td><td>55.8</td><td>14.0</td><td>44.0</td><td>62.1</td><td>82.3</td><td>76.8</td><td>48.5</td><td>58.9</td></tr><tr><td>+ DGA (100k)</td><td>58.5</td><td>86.8</td><td>56.5</td><td>14.2</td><td>41.0</td><td>62.2</td><td>82.1</td><td>75.8</td><td>47.0</td><td>58.2</td></tr><tr><td>+ DIDS (100k)</td><td>60.0</td><td>87.5</td><td>58.0</td><td>15.8</td><td>45.0</td><td>62.3</td><td>82.5</td><td>77.5</td><td>52.0</td><td>60.1</td></tr><tr><td colspan="11">Single-task Optimization</td></tr><tr><td>+ Doremi (100k)</td><td>60.5</td><td>87.5</td><td>58.0</td><td>14.5</td><td>44.5</td><td>62.2</td><td>82.8</td><td>77.0</td><td>49.0</td><td>59.6</td></tr><tr><td>+ Velocitune (100k)</td><td>60.0</td><td>87.2</td><td>57.5</td><td>14.2</td><td>44.0</td><td>62.0</td><td>82.5</td><td>76.8</td><td>48.5</td><td>59.2</td></tr><tr><td>+ Doge (100k)</td><td>60.2</td><td>87.8</td><td>58.2</td><td>14.8</td><td>44.8</td><td>62.3</td><td>82.7</td><td>77.2</td><td>49.5</td><td>59.7</td></tr><tr><td>+ DGA (100k)</td><td>60.8</td><td>87.5</td><td>59.5</td><td>15.0</td><td>46.0</td><td>62.5</td><td>82.9</td><td>77.0</td><td>50.0</td><td>60.1</td></tr><tr><td>+ DIDS (100k)</td><td>61.5</td><td>88.0</td><td>61.0</td><td>16.5</td><td>47.5</td><td>62.8</td><td>83.0</td><td>78.0</td><td>53.0</td><td>61.3</td></tr></table>

Table 7: Performance comparison of the Mixtral-7B model trained on the OpenHermes-2.5 dataset under different sampling strategies.

<table><tr><td rowspan="2">Variant</td><td colspan="2">Reasoning</td><td colspan="2">Mathematics</td><td>Instruction</td><td colspan="3">Commonsense</td><td>Truthfulness</td><td rowspan="2">Average</td></tr><tr><td>BBH</td><td>BoolQ</td><td>GSM8K</td><td>MathQA</td><td>IFEval</td><td>MMLU</td><td>PIQA</td><td>PubMedQA</td><td>TruthfulQA</td></tr><tr><td>DIDS (100k)</td><td>68.3</td><td>86.9</td><td>59.0</td><td>20.5</td><td>55.6</td><td>64.9</td><td>82.2</td><td>80.4</td><td>43.0</td><td>62.3</td></tr><tr><td>DIDS-GC</td><td>67.7</td><td>85.7</td><td>58.2</td><td>19.7</td><td>53.0</td><td>64.4</td><td>81.8</td><td>78.9</td><td>40.1</td><td>61.1</td></tr><tr><td>DIDS-FIM</td><td>67.2</td><td>85.0</td><td>57.4</td><td>18.6</td><td>51.9</td><td>64.1</td><td>81.5</td><td>77.2</td><td>38.5</td><td>60.2</td></tr><tr><td>DIDS-LT</td><td>67.5</td><td>85.3</td><td>57.8</td><td>19.5</td><td>51.4</td><td>64.2</td><td>81.6</td><td>77.5</td><td>38.1</td><td>60.3</td></tr></table>

Table 8: Comprehensive ablation study of DIDS across all downstream tasks. DIDS-GC replaces gradient-based clustering with BERT semantic clustering, DIDS-FIM removes the FIM-guided impact measurement, and DIDS-LT eliminates the loss trajectory and saturation consideration.

This component shows particularly notable benefits for TruthfulQA, PubMedQA, and MathQA tasks, confirming that measuring domain impact through output distributional changes provides a more accurate assessment than gradient similarity alone. Finally, loss trajectory consideration plays a vital role, as its elimination causes a 2.0-point decrease to 60.3, with substantial performance drops in instruction following and truthfulness tasks. This demonstrates the importance of accounting for both current learning progress and future potential when determining sampling probabilities.
# D.3 Domain Mixing Analysis

Understanding how domain weights evolve during training provides valuable insights into DIDS's operation. Table 9 shows the weight changes for 10 randomly selected domains (out of 256) throughout the training process for both DIDS and DGA.

This comparison reveals several key differences between DIDS and DGA. DIDS makes more decisive weight adjustments, with stronger amplification of valuable domains like D023 reaching 2.8 compared to DGA's 1.3, while aggressively reducing less useful domains such as D045 to 0.0 versus DGA's 0.1. This decisive resource allocation contributes significantly to DIDS's superior performance. Furthermore, domains like D078 show non-monotonic weight changes in DIDS, demonstrating its ability to adapt to the dynamic importance of domains during training, in contrast to DGA's more gradual and sometimes inconsistent adjustments. DIDS also tends to converge more quickly to stable domain weights, particularly for highly valuable or less useful domains, enabling more efficient training as the optimal sampling distribution is established earlier. Analysis of domain overlap between DIDS and DGA shows approximately $40\%$ consistency in domain selection, with substantial differences in the remaining $60\%$, highlighting the distinct impact assessment approaches of the two methods and explaining their performance differences.
# D.4 Domain Clustering Insights

Our gradient-based domain clustering revealed several interesting patterns in how training data is organized:

- Fine-grained Topic Distinction: With sufficiently large cluster counts (over 1,000), DIDS can distinguish between closely related topics. For example, in scientific data, middle school and high school biology knowledge are clustered separately, reflecting their different training effects on the model.
- Format Sensitivity: Within the same subject area (e.g., middle school biology), different question formats like multiple-choice and fill-in-the-blank are clustered into separate domains. This suggests that format significantly influences how data affects model learning, beyond just semantic content.
- Cross-domain Similarity: Some seemingly distinct topics like mathematical proofs and programming implementations are clustered together due to their shared logical reasoning patterns, despite their different semantic categories in traditional domain partitioning.
- Instruction Pattern Recognition: Data with similar instruction patterns tends to be clustered together regardless of content topic, highlighting the importance of task structure in determining training effects.
# E Theoretical Analysis of FIM-guided Impact Measurement

The Fisher Information Matrix (FIM) plays a crucial role in DIDS by enabling accurate measurement of how domain-specific parameter updates affect model behavior on downstream tasks. Here, we provide additional theoretical analysis to justify our approach.

# E.1 Relationship to Model Uncertainty

The FIM is inherently connected to model uncertainty through the Cramér-Rao bound, which establishes that the inverse of the FIM provides a lower bound on the covariance of any unbiased estimator of the parameters. In the context of domain impact measurement, this means that parameters with high Fisher information have a stronger influence on the model's predictive distribution and consequently on task performance.

For a parameter set $\theta$, the Fisher Information Matrix is defined as:

$$
F(\theta) = \mathbb{E}_{p(x|\theta)}\left[\nabla_{\theta}\log p(x|\theta)\, \nabla_{\theta}\log p(x|\theta)^{T}\right] \tag{17}
$$
When we compute the impact metric between domain $D_{i}$ and task $S_{j}$ as $I(D_{i},S_{j}) = \frac{1}{2}\Delta^{T}F\Delta$, we are effectively measuring the expected change in the model's log-likelihood on task $S_{j}$ when updated with domain $D_{i}$ data, weighted by the parameter sensitivity through the FIM.

<table><tr><td>Domain ID</td><td>Method</td><td>0%</td><td>10%</td><td>20%</td><td>30%</td><td>40%</td><td>50%</td><td>60%</td><td>70%</td><td>80%</td><td>90%</td><td>100%</td></tr><tr><td rowspan="2">D023</td><td>DIDS</td><td>0.4</td><td>0.5</td><td>0.7</td><td>0.9</td><td>1.2</td><td>1.6</td><td>1.9</td><td>2.3</td><td>2.5</td><td>2.7</td><td>2.8</td></tr><tr><td>DGA</td><td>0.4</td><td>0.4</td><td>0.5</td><td>0.6</td><td>0.7</td><td>0.9</td><td>1.0</td><td>1.1</td><td>1.2</td><td>1.3</td><td>1.3</td></tr><tr><td rowspan="2">D045</td><td>DIDS</td><td>0.4</td><td>0.3</td><td>0.2</td><td>0.2</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>DGA</td><td>0.4</td><td>0.3</td><td>0.3</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.1</td></tr><tr><td rowspan="2">D078</td><td>DIDS</td><td>0.4</td><td>0.6</td><td>0.8</td><td>1.0</td><td>1.5</td><td>1.2</td><td>1.2</td><td>1.3</td><td>1.3</td><td>1.4</td><td>1.3</td></tr><tr><td>DGA</td><td>0.4</td><td>0.5</td><td>0.6</td><td>0.7</td><td>0.8</td><td>0.8</td><td>0.9</td><td>0.4</td><td>0.6</td><td>0.3</td><td>0.4</td></tr><tr><td rowspan="2">D102</td><td>DIDS</td><td>0.4</td><td>0.7</td><td>0.9</td><td>1.2</td><td>1.4</td><td>1.5</td><td>1.6</td><td>1.7</td><td>1.7</td><td>1.7</td><td>1.7</td></tr><tr><td>DGA</td><td>0.4</td><td>0.5</td><td>0.6</td><td>0.7</td><td>0.8</td><td>0.8</td><td>0.9</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td></tr><tr><td rowspan="2">D129</td><td>DIDS</td><td>0.4</td><td>0.2</td><td>0.1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>DGA</td><td>0.4</td><td>0.3</td><td>0.2</td><td>0.2</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.0</td><td>0.0</td></tr><tr><td rowspan="2">D147</td><td>DIDS</td><td>0.4</td><td>0.3</td><td>0.2</td><td>0.2</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.0</td><td>0.0</td></tr><tr><td>DGA</td><td>0.4</td><td>0.4</td><td>0.3</td><td>0.3</td><td>0.3</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.1</td></tr><tr><td rowspan="2">D175</td><td>DIDS</td><td>0.4</td><td>0.5</td><td>0.5</td><td>0.6</td><td>0.6</td><td>0.7</td><td>0.7</td><td>0.7</td><td>0.7</td><td>0.8</td><td>0.8</td></tr><tr><td>DGA</td><td>0.4</td><td>0.5</td><td>0.5</td><td>0.5</td><td>0.6</td><td>0.6</td><td>0.6</td><td>0.6</td><td>0.6</td><td>0.6</td><td>0.6</td></tr><tr><td rowspan="2">D198</td><td>DIDS</td><td>0.4</td><td>0.7</td><td>1.0</td><td>1.3</td><td>1.5</td><td>1.7</td><td>1.8</td><td>1.9</td><td>1.9</td><td>1.9</td><td>1.9</td></tr><tr><td>DGA</td><td>0.4</td><td>0.6</td><td>0.7</td><td>0.8</td><td>0.9</td><td>0.9</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td></tr><tr><td rowspan="2">D221</td><td>DIDS</td><td>0.4</td><td>0.2</td><td>0.1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>DGA</td><td>0.4</td><td>0.3</td><td>0.2</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td rowspan="2">D244</td><td>DIDS</td><td>0.4</td><td>0.4</td><td>0.5</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.5</td><td>0.4</td><td>0.4</td><td>0.4</td></tr><tr><td>DGA</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td><td>0.4</td></tr></table>

Table 9: Comparison of domain weight evolution between DIDS and DGA across training progress (from 0% to 100% completion).
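Under the diagonal approximation used by DIDS, the impact metric reduces to an elementwise-weighted squared distance between gradients. A PyTorch-style sketch follows; it uses the empirical Fisher (the squared score) as the diagonal estimate and assumes per-sample losses are available, so it is an illustration of the idea rather than the authors' implementation.

```python
import torch

def diagonal_fim(model, sample_losses):
    """Diagonal FIM approximation: average of squared per-sample gradients.

    sample_losses: iterable of scalar loss tensors, each a per-sample
    negative log-likelihood -log p(x | theta) sharing one autograd graph.
    """
    fim = {n: torch.zeros_like(p)
           for n, p in model.named_parameters() if p.requires_grad}
    losses = list(sample_losses)
    for loss in losses:
        model.zero_grad()
        loss.backward(retain_graph=True)  # one backward per sample
        for n, p in model.named_parameters():
            if p.grad is not None:
                fim[n] += p.grad.detach() ** 2
    return {n: f / len(losses) for n, f in fim.items()}

def impact_score(grad_task, grad_domain, fim):
    """I(D_i, S_j) = 0.5 * Delta^T F Delta with diagonal F."""
    score = 0.0
    for n in fim:
        delta = grad_task[n] - grad_domain[n]
        score = score + 0.5 * (fim[n] * delta * delta).sum()
    return score
```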
# E.2 Consistency with KL Divergence

The KL divergence between two distributions $p(y|\theta_{D_i})$ and $p(y|\theta_{S_j})$ measures how much information is lost when using one distribution to approximate the other. Our use of the second-order Taylor approximation of KL divergence:

$$
\mathrm{KL}\left[p(\theta_{D_i}) \,\|\, p(\theta_{S_j})\right] \approx \frac{1}{2}\Delta^{T}F\Delta \tag{18}
$$

captures this information loss efficiently and accurately when the parameter updates are relatively small. Furthermore, this approximation has the advantage of being positive definite and symmetric (when properly scaled), which makes it a suitable measure for domain impact.
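For completeness, the standard derivation behind this approximation expands the KL divergence around $\theta$ for a small update $\Delta$:

$$
\begin{aligned}
\mathrm{KL}\left[p_{\theta} \,\|\, p_{\theta + \Delta}\right]
  &= \mathbb{E}_{p_{\theta}}\left[\log p_{\theta} - \log p_{\theta + \Delta}\right] \\
  &\approx -\,\mathbb{E}_{p_{\theta}}\left[\nabla_{\theta}\log p_{\theta}\right]^{T}\Delta
     \;-\; \frac{1}{2}\Delta^{T}\,\mathbb{E}_{p_{\theta}}\left[\nabla_{\theta}^{2}\log p_{\theta}\right]\Delta \\
  &= \frac{1}{2}\Delta^{T} F(\theta)\, \Delta,
\end{aligned}
$$

using that the expected score vanishes, $\mathbb{E}_{p_{\theta}}[\nabla_{\theta}\log p_{\theta}] = 0$, and that $F(\theta) = -\mathbb{E}_{p_{\theta}}[\nabla_{\theta}^{2}\log p_{\theta}]$ under standard regularity conditions.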
# E.3 Extensions to Alternative Divergences

While our implementation focuses on KL divergence, the framework can be extended to other divergence measures such as Wasserstein distance or Jensen-Shannon divergence. The general form would remain similar:

$$
D\left[p(\theta_{D_i}) \,\|\, p(\theta_{S_j})\right] \approx \left(\nabla \ell_{S_j} - \nabla \ell_{D_i}\right)^{T} M \left(\nabla \ell_{S_j} - \nabla \ell_{D_i}\right) \tag{19}
$$

where $M$ is a metric tensor appropriate for the chosen divergence. This flexibility allows DIDS to be adapted to different notions of distribution similarity based on specific requirements.
# F Practical Guidelines for DIDS Implementation

Based on our experiments and analyses, we provide the following practical guidelines for implementing DIDS effectively:

- Domain Count Selection: Start with a medium number of domains (approximately 50-100) for gradient-based clustering. Increasing the domain count beyond 100 provides diminishing returns in most cases while increasing computational cost.
- Update Frequency: Set the domain sampling probability update interval to approximately $5\%$ to $10\%$ of the total training steps. More frequent updates can cause instability, while less frequent updates may miss important adaptation opportunities.
- EMA Coefficient Tuning: Use an EMA coefficient $\beta$ of 0.1-0.3 for stability. Lower values allow for more rapid adaptation, which is beneficial in early training stages, while higher values provide stability in later stages.
- Proxy Model Selection: A proxy model $5\%$ to $10\%$ of the size of the target model typically provides a good balance between computational efficiency and gradient similarity. The proxy model should maintain the same architecture family as the target model for best results.
- Downstream Task Selection: Include a diverse set of downstream tasks in the observation set, covering different capability areas like reasoning, knowledge, and instruction following. This diversity ensures balanced optimization across different aspects of model performance.
- FIM Computation Efficiency: Compute the diagonal FIM approximation using a small batch size (16-32) for efficiency without significant loss in accuracy. The FIM computation only needs to be performed during domain sampling probability updates.
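Condensed into code, these recommendations might look like the following defaults; the names and exact values are illustrative choices rather than a fixed API.

```python
# Illustrative DIDS defaults distilled from the guidelines above.
DIDS_DEFAULTS = {
    "num_domains": 72,             # 50-100 recommended; diminishing returns beyond 100
    "update_interval_frac": 0.07,  # refresh probabilities every ~5-10% of total steps
    "ema_beta": 0.1,               # 0.1-0.3; lower adapts faster, higher is more stable
    "proxy_model_frac": 0.07,      # proxy ~5-10% of target size, same architecture family
    "fim_batch_size": 32,          # small batch (16-32) for the diagonal FIM estimate
}
```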
# G Mixed Ratio Analysis

To further validate the effectiveness of different domain mixing strategies, we conducted a grid search analysis similar to that reported in previous work (Zhang et al., 2022; McKinzie et al., 2025). Table 10 presents results for different mixing ratios of code/math domains versus general domains.

<table><tr><td>Mixing Strategy</td><td>GSM8K</td><td>HumanEval</td><td>MT-Bench</td></tr><tr><td>Mix[(code,math), 1 general]</td><td>47.53</td><td>14.63</td><td>5.76</td></tr><tr><td>Mix[(code,math), 1/4 general]</td><td>48.44</td><td>15.85</td><td>5.73</td></tr><tr><td>Mix[(code,math), 1/16 general]</td><td>47.99</td><td>15.24</td><td>5.27</td></tr><tr><td>Mix[(code,math), 1/64 general]</td><td>47.23</td><td>14.63</td><td>5.16</td></tr><tr><td>Mix[(code,math), 1/256 general]</td><td>48.52</td><td>16.46</td><td>4.69</td></tr><tr><td>Mix[1(code,math), general]</td><td>47.53</td><td>14.63</td><td>5.76</td></tr><tr><td>Mix[1/4(code,math), general]</td><td>41.31</td><td>10.97</td><td>5.81</td></tr><tr><td>Mix[1/16(code,math), general]</td><td>33.20</td><td>11.58</td><td>5.76</td></tr><tr><td>Mix[1/64(code,math), general]</td><td>25.17</td><td>12.19</td><td>5.84</td></tr><tr><td>Mix[1/256(code,math), general]</td><td>16.52</td><td>9.14</td><td>5.82</td></tr><tr><td>DIDS (dynamic)</td><td>52.21</td><td>18.05</td><td>5.88</td></tr></table>

Table 10: Performance with different static mixing ratios between specialized and general domains.

This analysis demonstrates several key points. First, DIDS's dynamic approach outperforms all static mixing ratios across all three tasks, highlighting the limitations of fixed domain proportions throughout training. Second, different tasks show different optimal static mixing ratios: GSM8K and HumanEval benefit from higher proportions of code and math content, while MT-Bench performs better with more balanced or general-leaning distributions. Third, increasing the proportion of specialized domains like code and math significantly improves performance on related tasks such as GSM8K and HumanEval but can negatively impact general capabilities measured by MT-Bench. DIDS effectively navigates these trade-offs through dynamic adaptation.

These results align with findings from industry practices in models like MM1 (McKinzie et al., 2025) and Llama3 (Touvron et al., 2023), where mixing ratios are carefully tuned through extensive grid search. DIDS automates this process and improves upon static optimal ratios through dynamic adaptation.
# H Domain Partitioning Robustness Analysis

To evaluate DIDS's sensitivity to domain partitioning quality, we conducted destructive experiments by artificially corrupting the domain structure. Starting with our standard 72-domain gradient-based partitioning, we randomly swapped half of each domain's data with samples from other domains, creating mixed domains that violate intra-domain consistency assumptions.
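The corruption protocol can be sketched in a few lines of NumPy; `swap_fraction` and the label-permutation scheme below are our reading of the description above, not the authors' exact procedure.

```python
import numpy as np

def corrupt_domains(domain_labels, swap_fraction=0.5, seed=0):
    """Destructive test: reassign a random half of the samples across domains.

    Permuting the labels of the selected samples breaks intra-domain
    consistency while preserving the overall domain sizes.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(domain_labels).copy()
    n = len(labels)
    swap_idx = rng.choice(n, size=int(n * swap_fraction), replace=False)
    labels[swap_idx] = rng.permutation(labels[swap_idx])
    return labels
```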
Table 11 demonstrates that while unreasonable partitioning degrades DIDS performance by 3.5 points (from 63.7 to 60.2), it still outperforms random sampling by 1.3 points. This indicates that DIDS exhibits graceful degradation and maintains effectiveness even when domain assumptions are violated, highlighting the robustness of our FIM-guided impact measurement and dynamic sampling components beyond perfect domain organization.

<table><tr><td>Method</td><td>BBH</td><td>BoolQ</td><td>GSM8K</td><td>MathQA</td><td>IFEval</td><td>MMLU</td><td>PIQA</td><td>PubMedQA</td><td>TruthfulQA</td><td>Avg</td></tr><tr><td>Random</td><td>67.4</td><td>85.6</td><td>58.9</td><td>11.4</td><td>48.2</td><td>64.0</td><td>82.0</td><td>77.4</td><td>31.5</td><td>58.9</td></tr><tr><td>DIDS (Original)</td><td>69.2</td><td>87.5</td><td>63.0</td><td>21.5</td><td>57.5</td><td>65.8</td><td>83.0</td><td>81.2</td><td>44.8</td><td>63.7</td></tr><tr><td>DIDS (Unreasonable)</td><td>67.4</td><td>86.1</td><td>60.6</td><td>14.5</td><td>50.6</td><td>65.1</td><td>82.6</td><td>79.6</td><td>35.4</td><td>60.2</td></tr></table>

Table 11: Robustness analysis under corrupted domain partitioning on Llama-3.1-8B.
# I Detailed Computational Cost Analysis

We provide a comprehensive breakdown of computational costs across all baseline methods to demonstrate DIDS's efficiency advantages.

DoReMi and DoGE employ offline reweighting strategies using 280M parameter proxy models. DoReMi requires: (1) reference model training with uniform sampling ($2.05 \times 10^{3}$ TFLOPs, 3.8 GPU hours), (2) proxy model training with Group DRO ($2.05 \times 10^{3}$ TFLOPs, 3.8 GPU hours), and (3) excess loss computation (0.37 TFLOPs, 0.004 GPU hours). DoGE follows similar complexity but uses gradient similarity calculations instead of excess loss computation.

Velocitune employs two phases: (1) target estimation using full 8B models on $51\%$ subsampled data ($2.79\times 10^{4}$ TFLOPs, 51.8 GPU hours), and (2) velocity-guided training with periodic updates every 150 steps ($1.2\times 10^{2}$ TFLOPs, 0.2 GPU hours). The method requires significantly more resources due to full-size model training for target estimation.

DGA uses online gradient alignment with minimal overhead for maintaining running averages of domain-specific gradients.

Table 12 demonstrates that DIDS achieves superior performance improvements while maintaining competitive computational efficiency. Our method requires only $3.7\%$ additional overhead, significantly lower than Velocitune's $51.2\%$ and comparable to efficient methods like DGA, while providing substantial performance gains.

<table><tr><td>Method</td><td>Total TFLOPs</td><td>Overhead</td><td>Total GPU Hours</td></tr><tr><td>Base Training</td><td>5.47 × 10<sup>4</sup></td><td>-</td><td>101.6</td></tr><tr><td>DGA</td><td>5.56 × 10<sup>4</sup></td><td>1.6%</td><td>103.2</td></tr><tr><td>DIDS</td><td>5.67 × 10<sup>4</sup></td><td>3.7%</td><td>105.2</td></tr><tr><td>DoReMi</td><td>5.88 × 10<sup>4</sup></td><td>7.5%</td><td>109.2</td></tr><tr><td>DoGE</td><td>5.88 × 10<sup>4</sup></td><td>7.5%</td><td>109.2</td></tr><tr><td>Velocitune</td><td>8.27 × 10<sup>4</sup></td><td>51.2%</td><td>153.6</td></tr></table>

Table 12: Comprehensive computational cost comparison across all methods.
# J Out-of-Distribution Generalization Analysis

To assess potential overfitting to specific downstream tasks, we evaluate DIDS's generalization capability on unseen out-of-distribution (OOD) tasks across two experimental setups.

For the diverse downstream task setup, we used our standard 9-task evaluation suite (BBH, BoolQ, GSM8K, MathQA, IFEval, MMLU, PIQA, PubMedQA, TruthfulQA) during training, then evaluated on four unseen OOD tasks: WMT16 English-German translation (BLEU), TLDR summarization (Win Rate vs. Llama-3.1-8B), ARC-Challenge science questions (ACC), and MBPP code generation (Pass@1).

For the single-task setup, we also tested the extreme case where only one downstream task (MathQA or TruthfulQA) guides training.

The results in Table 13 demonstrate that DIDS maintains competitive OOD performance, especially when optimized for diverse downstream objectives, achieving the highest average performance (53.5) across all OOD evaluation tasks. As shown in Table 14, when using single downstream tasks, DIDS shows meaningful cross-task transfer: mathematical reasoning benefits scientific reasoning (ARC: 86.4) and code generation (MBPP: 60.6), while truthfulness training improves summarization (TLDR: 54.1). While domain-specific optimization can lead to some specialization effects in unrelated areas, DIDS's FIM-guided approach captures meaningful cross-task dependencies and maintains reasonable generalization capabilities.

<table><tr><td>Method</td><td>WMT16 EN-DE</td><td>TLDR</td><td>ARC-Challenge</td><td>MBPP</td><td>Average</td></tr><tr><td>Llama-3.1-8B (no training)</td><td>17.1</td><td>50.1</td><td>76.4</td><td>55.6</td><td>49.8</td></tr><tr><td>Random (100k)</td><td>16.8</td><td>47.5</td><td>82.7</td><td>57.3</td><td>51.0</td></tr><tr><td>DGA (100k)</td><td>17.3</td><td>49.8</td><td>83.6</td><td>59.7</td><td>52.6</td></tr><tr><td>DIDS (100k)</td><td>17.2</td><td>50.0</td><td>85.7</td><td>61.3</td><td>53.5</td></tr></table>

Table 13: OOD generalization performance with diverse downstream task training.

<table><tr><td rowspan="2">Method</td><td colspan="4">Training Target: MathQA</td><td colspan="4">Training Target: TruthfulQA</td></tr><tr><td>WMT16</td><td>TLDR</td><td>ARC</td><td>MBPP</td><td>WMT16</td><td>TLDR</td><td>ARC</td><td>MBPP</td></tr><tr><td>Llama-3.1-8B (no training)</td><td>17.1</td><td>50.1</td><td>76.4</td><td>55.6</td><td>17.1</td><td>50.1</td><td>76.4</td><td>55.6</td></tr><tr><td>Random</td><td>16.8</td><td>47.5</td><td>82.7</td><td>57.3</td><td>16.8</td><td>47.5</td><td>82.7</td><td>57.3</td></tr><tr><td>DGA</td><td>15.9</td><td>45.3</td><td>83.7</td><td>59.2</td><td>17.8</td><td>48.9</td><td>81.4</td><td>56.8</td></tr><tr><td>DIDS</td><td>15.9</td><td>45.8</td><td>86.4</td><td>60.6</td><td>17.6</td><td>54.1</td><td>82.3</td><td>56.6</td></tr></table>

Table 14: OOD generalization under single-task optimization scenarios.
627
+
628
+ Algorithm 1: Domain Impact-aware Data Sampling (DIDS)
629
+ Input: Training dataset $\mathcal{D}$; downstream tasks $S$; proxy model $f^{\prime}$; number of domains $k$; update interval $\tau$; EMA coefficient $\beta$; training steps $T$
630
+ Output: Domain sampling probabilities $\mathbf{p}_t$
631
+ 1 // Initialize sampling probabilities uniformly;
632
+ 2 $\mathbf{p}_0\gets [1/k,\dots ,1/k]$;
633
+ 3 // Domain repartition based on gradients;
634
+ 4 $G\gets \emptyset$;
635
+ 5 foreach $x_{i}\in \mathcal{D}$ do
636
+ 6 $g_i\leftarrow \nabla \ell (f',x_i)$;
637
+ 7 $g_i\leftarrow \mathrm{TopK}(g_i)$;
638
+ 8 $\tilde{g}_i\gets R^T g_i$
639
+ 9 $G\gets G\cup \{\tilde{g}_i\}$
640
+ 10 end
641
+ 11 $\{D_1,\ldots ,D_k\} \gets \mathrm{KMeans}(G,k)$
642
+ 12 for $t\gets 1$ to $T$ do
643
+ 13 if $t$ mod $\tau = 0$ then
644
+ 14 // Compute domain impact matrix;
645
+ 15 foreach $D_{i}\in \{D_{1},\dots,D_{k}\}$ do
646
+ 16 foreach $S_{j}\in \{S_{1},\dots,S_{m}\}$ do
647
+ 17 $\Delta \leftarrow \nabla \ell_{S_j} - \nabla \ell_{D_i};$
648
+ 18 $F\leftarrow \mathbb{E}[\nabla \log p(\theta)\odot \nabla \log p(\theta)]$;
649
+ 19 $I(D_i, S_j)\gets \frac{1}{2}\Delta^{T}F\Delta$;
650
+ 20 end
651
+ 21 end
652
+ 22 // Compute future potential;
653
+ 23 foreach $S_{j}\in \{S_{1},\dots,S_{m}\}$ do
654
+ 24 Fit $L(t) = ae^{-bt} + c$ using loss history $\{L_1(S_j),\dots,L_t(S_j)\}$;
655
+ 25 $L_{p}(S_{j})\leftarrow L_{t}(S_{j}) - L(t + \tau);$
656
+ 26 $\Delta L(S_j)\leftarrow L_{t - 1}(S_j) - L_t(S_j)$
657
+ 27 end
658
+ 28 // Update sampling probabilities;
659
+ 29 foreach $D_{i}\in \{D_{1},\dots,D_{k}\}$ do
660
+ 30 $U(D_i)\leftarrow \sum_{j}\frac{I(D_i,S_j)\cdot(\Delta L(S_j) + L_p(S_j))}{p_{t-1,i}}$;
661
+ 31 $\hat{p}_{t,i}\gets \mathrm{softmax}(U(D_i));$
662
+ 32 $p_{t,i}\gets \beta p_{t - 1,i} + (1 - \beta)\hat{p}_{t,i};$
663
+ 33 end
664
+ 34 $\mathbf{p}_t\gets \mathbf{p}_t / \sum_i p_{t,i}$;
665
+ 35 end
666
+ 36 Sample batch $B_{t}$ according to $\mathbf{p}_t$;
667
+ 37 Update model parameters $\theta$ using $B_{t}$ .
668
+ 38 end
669
+ 39 return $\mathbf{p}_t$
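+
+ To make Algorithm 1 concrete, the following is a minimal NumPy/SciPy sketch of its core computations: the FIM-weighted impact score (lines 17-19), the fitted future potential (lines 24-25), and the sampling-probability update (lines 28-34). The function names, the diagonal FIM approximation, and the curve-fit initial guess are illustrative assumptions, not the authors' released implementation.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import curve_fit
+
+ def domain_impact(grad_task, grad_domain, fim_diag):
+     # I(D_i, S_j) = 1/2 * Delta^T F Delta, with F approximated by its diagonal
+     delta = grad_task - grad_domain
+     return 0.5 * float(delta @ (fim_diag * delta))
+
+ def future_potential(loss_history, tau):
+     # Fit L(t) = a*exp(-b*t) + c (line 24), return L_p = L_t - L(t + tau)
+     t = np.arange(1, len(loss_history) + 1, dtype=float)
+     (a, b, c), _ = curve_fit(lambda t, a, b, c: a * np.exp(-b * t) + c,
+                              t, loss_history, p0=(1.0, 0.01, 0.0), maxfev=10000)
+     return loss_history[-1] - (a * np.exp(-b * (t[-1] + tau)) + c)
+
+ def update_sampling_probs(p_prev, I, delta_L, L_p, beta=0.9):
+     # p_prev: (k,) previous probabilities; I: (k, m) impact matrix;
+     # delta_L, L_p: (m,) recent loss drop and future potential per task.
+     utility = (I * (delta_L + L_p)).sum(axis=1) / p_prev   # U(D_i), line 30
+     z = np.exp(utility - utility.max())                    # stable softmax
+     p_hat = z / z.sum()                                    # line 31
+     p = beta * p_prev + (1 - beta) * p_hat                 # EMA, line 32
+     return p / p.sum()                                     # normalize, line 34
+ ```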
2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9aae37976a7827b2142cc2f0776c84517631f4754a8a230a7659a311e4264cc7
3
+ size 1199646
2025/DIDS_ Domain Impact-aware Data Sampling for Large Language Model Training/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DINT Transformer/0d93eff8-c6dd-408c-9936-774d22532538_content_list.json ADDED
@@ -0,0 +1,1505 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "DINT Transformer",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 396,
8
+ 86,
9
+ 601,
10
+ 104
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Yueyang Cang*",
17
+ "bbox": [
18
+ 179,
19
+ 118,
20
+ 312,
21
+ 135
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Yuhang Liu*",
28
+ "bbox": [
29
+ 440,
30
+ 118,
31
+ 552,
32
+ 135
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Xiaoteng Zhang",
39
+ "bbox": [
40
+ 678,
41
+ 118,
42
+ 821,
43
+ 136
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Tsinghua University / Beijing",
50
+ "bbox": [
51
+ 374,
52
+ 136,
53
+ 621,
54
+ 152
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "{cangyy23,yh-liu23,zxt21}@mails.tsinghua.edu.cn",
61
+ "bbox": [
62
+ 260,
63
+ 153,
64
+ 736,
65
+ 168
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "Erlu Zhao",
72
+ "text_level": 1,
73
+ "bbox": [
74
+ 284,
75
+ 187,
76
+ 378,
77
+ 200
78
+ ],
79
+ "page_idx": 0
80
+ },
81
+ {
82
+ "type": "text",
83
+ "text": "Peking University Health Science Center / BeiJing Tsinghua University / BeiJing",
84
+ "bbox": [
85
+ 124,
86
+ 204,
87
+ 786,
88
+ 219
89
+ ],
90
+ "page_idx": 0
91
+ },
92
+ {
93
+ "type": "text",
94
+ "text": "2411210080@bjmu.edu.cn",
95
+ "bbox": [
96
+ 220,
97
+ 221,
98
+ 443,
99
+ 236
100
+ ],
101
+ "page_idx": 0
102
+ },
103
+ {
104
+ "type": "text",
105
+ "text": "zzushi@126.com",
106
+ "bbox": [
107
+ 593,
108
+ 221,
109
+ 739,
110
+ 235
111
+ ],
112
+ "page_idx": 0
113
+ },
114
+ {
115
+ "type": "text",
116
+ "text": "Abstract",
117
+ "text_level": 1,
118
+ "bbox": [
119
+ 260,
120
+ 260,
121
+ 339,
122
+ 275
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "text",
128
+ "text": "The DIFF Transformer mitigates interference from irrelevant contexts by introducing a differential attention mechanism, thereby enhancing focus on critical tokens. However, this architecture suffers from two major limitations: first, its use of two independent attention matrices leads to numerical instability, and second, it lacks global context modeling, which is essential for identifying globally significant tokens. To address these challenges, we propose the DINT Transformer, which extends the DIFF Transformer by incorporating an integral mechanism. By computing global importance scores and integrating them into the attention matrix, the DINT Transformer not only improves overall numerical stability but also significantly enhances its ability to capture global dependencies. Experimental results demonstrate that the DINT Transformer achieves superior accuracy and robustness across various practical applications, including long-context language modeling and key information retrieval. These advancements establish the DINT Transformer as a highly effective and promising architecture.",
129
+ "bbox": [
130
+ 141,
131
+ 286,
132
+ 460,
133
+ 627
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "text",
139
+ "text": "1 Introduction",
140
+ "text_level": 1,
141
+ "bbox": [
142
+ 114,
143
+ 650,
144
+ 258,
145
+ 665
146
+ ],
147
+ "page_idx": 0
148
+ },
149
+ {
150
+ "type": "text",
151
+ "text": "Transformer(Vaswani, 2017), as one of the most popular models in the field of artificial intelligence today, is widely used in natural language processing, computer vision, and other fields, especially with the application of decoder-only architectures in large language models (LLMs). Its core lies in the attention mechanism based on softmax, which assigns importance to different tokens in a sequence. However, recent research(Lu et al., 2021) has found that LLMs face the challenge of attention noise when accurately focusing on key information in the context.",
152
+ "bbox": [
153
+ 112,
154
+ 675,
155
+ 489,
156
+ 866
157
+ ],
158
+ "page_idx": 0
159
+ },
160
+ {
161
+ "type": "text",
162
+ "text": "To address the issue of attention noise, DIFF Transformer(Ye et al., 2024) introduces a differen",
163
+ "bbox": [
164
+ 112,
165
+ 869,
166
+ 489,
167
+ 900
168
+ ],
169
+ "page_idx": 0
170
+ },
171
+ {
172
+ "type": "text",
173
+ "text": "tial attention mechanism that effectively suppresses the impact of irrelevant context by computing DIFFerence between two independent attention distributions. However, DIFF Transformer still exhibits significant limitations: The use of two independent attention matrices makes it difficult to accurately estimate weights for noisy components, resulting in numerical instability that may adversely affect downstream task performance.",
174
+ "bbox": [
175
+ 507,
176
+ 261,
177
+ 884,
178
+ 405
179
+ ],
180
+ "page_idx": 0
181
+ },
182
+ {
183
+ "type": "text",
184
+ "text": "Through our research, we observed that the semantic interpretation of most tokens in a sequence often depends on a few globally critical tokens. Taking sentence processing as an example, key elements such as subjects or main predicate verbs frequently serve as semantic anchors (as illustrated in Figure 1), playing a decisive role in constructing overall meaning. Building on this insight, we developed DINT Transformer by introducing an integral mechanism to extend DIFF Transformer. This integral component computes global importance scores, enabling the model to enhance its focus on critical tokens. Our proposed DINT Transformer not only further reduces attention noise by strengthening the focus on globally important tokens, but also significantly decreases the frequency of negative values in attention matrices through parametric design, thereby improving the model's overall numerical stability and substantially boosting performance.",
185
+ "bbox": [
186
+ 507,
187
+ 406,
188
+ 884,
189
+ 727
190
+ ],
191
+ "page_idx": 0
192
+ },
193
+ {
194
+ "type": "text",
195
+ "text": "Through comprehensive experiments on long-context language modeling and key information retrieval tasks, we rigorously validated the efficacy of DINT Transformer. The results demonstrate that DINT Transformer consistently outperforms both conventional Transformer and DIFF Transformer across all tasks. Its integral mechanism not only effectively captures global dependencies and further suppresses attention noise, but also significantly enhances model stability, successfully addressing inherent limitations of existing approaches. Moreover, while maintaining excellent scalability, DINT",
196
+ "bbox": [
197
+ 507,
198
+ 728,
199
+ 884,
200
+ 920
201
+ ],
202
+ "page_idx": 0
203
+ },
204
+ {
205
+ "type": "page_footnote",
206
+ "text": "*Equal contribution.",
207
+ "bbox": [
208
+ 134,
209
+ 906,
210
+ 262,
211
+ 920
212
+ ],
213
+ "page_idx": 0
214
+ },
215
+ {
216
+ "type": "page_number",
217
+ "text": "9813",
218
+ "bbox": [
219
+ 478,
220
+ 927,
221
+ 519,
222
+ 940
223
+ ],
224
+ "page_idx": 0
225
+ },
226
+ {
227
+ "type": "footer",
228
+ "text": "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 9813-9821 November 4-9, 2025 ©2025 Association for Computational Linguistics",
229
+ "bbox": [
230
+ 159,
231
+ 945,
232
+ 835,
233
+ 972
234
+ ],
235
+ "page_idx": 0
236
+ },
237
+ {
238
+ "type": "image",
239
+ "img_path": "images/c5832b5874aae07cfe0d6fc9ccf2edcaf9db53c18224229424ae921d100551d7.jpg",
240
+ "image_caption": [
241
+ "Einstein was born in Germany and was a physicist, mathematician, and philosopher.",
242
+ "Figure 1: The DIFF Transformer's use of two independent attention matrices results in a significant proportion of negative values in its final attention scores. In contrast, the DINT Transformer substantially reduces the occurrence of negative values, enhances numerical stability, and more effectively strengthens attention to globally important tokens—such as precisely focusing on key entities like \"Newton\" within sentences."
243
+ ],
244
+ "image_footnote": [],
245
+ "bbox": [
246
+ 127,
247
+ 123,
248
+ 853,
249
+ 307
250
+ ],
251
+ "page_idx": 1
252
+ },
253
+ {
254
+ "type": "text",
255
+ "text": "Transformer delivers substantial performance improvements in downstream tasks such as key information retrieval. These significant findings establish DINT Transformer as a robust foundational architecture for future advancements in sequence modeling and large language models.",
256
+ "bbox": [
257
+ 112,
258
+ 404,
259
+ 489,
260
+ 501
261
+ ],
262
+ "page_idx": 1
263
+ },
264
+ {
265
+ "type": "text",
266
+ "text": "2 DINT Transformer",
267
+ "text_level": 1,
268
+ "bbox": [
269
+ 112,
270
+ 511,
271
+ 315,
272
+ 526
273
+ ],
274
+ "page_idx": 1
275
+ },
276
+ {
277
+ "type": "text",
278
+ "text": "DINT Transformer is designed as a robust architecture for sequence modeling, particularly for large language models (LLMs). The model consists of $L$ stacked layers, where each layer applies a DINT attention module followed by a feedforward network. Starting from token embeddings $X_0 \\in \\mathbb{R}^{N \\times d_{\\mathrm{model}}}$ , the input is progressively transformed through $L$ layers to produce the final output $X_L$ . The key innovation lies in the addition of an integral mechanism within the attention module, which enables effective modeling of global dependencies while preserving numerical stability. The overall structure aligns with common practices, incorporating pre-RMSNorm(Zhang and Sennrich, 2019) and SwiGLU(Ramachandran et al., 2017; Shazeer, 2020) for enhanced performance following LLaMA(Touvron et al., 2023). A diagram of the model architecture is shown in Figure 2.",
279
+ "bbox": [
280
+ 112,
281
+ 536,
282
+ 489,
283
+ 826
284
+ ],
285
+ "page_idx": 1
286
+ },
287
+ {
288
+ "type": "text",
289
+ "text": "2.1 DIFF Attention",
290
+ "text_level": 1,
291
+ "bbox": [
292
+ 112,
293
+ 835,
294
+ 284,
295
+ 850
296
+ ],
297
+ "page_idx": 1
298
+ },
299
+ {
300
+ "type": "text",
301
+ "text": "DIFF attention introduces a differential attention mechanism that reduces attention noise by leveraging the difference between two attention distributions. Specifically, given the input $X \\in \\mathbb{R}^{N \\times d_{\\mathrm{model}}}$ ,",
302
+ "bbox": [
303
+ 112,
304
+ 856,
305
+ 489,
306
+ 921
307
+ ],
308
+ "page_idx": 1
309
+ },
310
+ {
311
+ "type": "text",
312
+ "text": "it is projected to query, key, and value matrices:",
313
+ "bbox": [
314
+ 507,
315
+ 404,
316
+ 863,
317
+ 420
318
+ ],
319
+ "page_idx": 1
320
+ },
321
+ {
322
+ "type": "equation",
323
+ "text": "\n$$\n[ Q _ {1}; Q _ {2} ] = X W _ {Q}, \\quad [ K _ {1}; K _ {2} ] = X W _ {K}, \\quad V = X W _ {V}, \\tag {1}\n$$\n",
324
+ "text_format": "latex",
325
+ "bbox": [
326
+ 515,
327
+ 430,
328
+ 882,
329
+ 456
330
+ ],
331
+ "page_idx": 1
332
+ },
333
+ {
334
+ "type": "text",
335
+ "text": "where $Q_{1}, Q_{2}, K_{1}, K_{2} \\in \\mathbb{R}^{N \\times d}$ and $V \\in \\mathbb{R}^{N \\times 2d}$ are the projected matrices, and $W_{Q}, W_{K}, W_{V} \\in \\mathbb{R}^{d_{\\mathrm{model}} \\times 2d}$ are learnable parameters. The differential attention operator computes the output as:",
336
+ "bbox": [
337
+ 507,
338
+ 456,
339
+ 884,
340
+ 521
341
+ ],
342
+ "page_idx": 1
343
+ },
344
+ {
345
+ "type": "equation",
346
+ "text": "\n$$\n\\operatorname {D i f f A t t n} (X) = \\left(\\operatorname {s o f t m a x} \\left(\\frac {Q _ {1} K _ {1} ^ {\\top}}{\\sqrt {d}}\\right) - \\lambda \\cdot \\operatorname {s o f t m a x} \\left(\\frac {Q _ {2} K _ {2} ^ {\\top}}{\\sqrt {d}}\\right)\\right) V \\tag {2}\n$$\n",
347
+ "text_format": "latex",
348
+ "bbox": [
349
+ 526,
350
+ 532,
351
+ 882,
352
+ 570
353
+ ],
354
+ "page_idx": 1
355
+ },
356
+ {
357
+ "type": "text",
358
+ "text": "where $\\lambda$ is a learnable scalar parameter. This differential mechanism effectively suppresses irrelevant context, enhancing the robustness of the attention scores by canceling common-mode noise, analogous to the operation of differential amplifiers in electrical engineering. To synchronize learning dynamics, $\\lambda$ is re-parameterized as:",
359
+ "bbox": [
360
+ 507,
361
+ 571,
362
+ 882,
363
+ 684
364
+ ],
365
+ "page_idx": 1
366
+ },
367
+ {
368
+ "type": "equation",
369
+ "text": "\n$$\n\\lambda = \\exp \\left(\\lambda_ {q 1} \\cdot \\lambda_ {k 1}\\right) - \\exp \\left(\\lambda_ {q 2} \\cdot \\lambda_ {k 2}\\right) + \\lambda_ {\\text {i n i t}}, \\tag {3}\n$$\n",
370
+ "text_format": "latex",
371
+ "bbox": [
372
+ 517,
373
+ 697,
374
+ 882,
375
+ 714
376
+ ],
377
+ "page_idx": 1
378
+ },
379
+ {
380
+ "type": "text",
381
+ "text": "where $\\lambda_{q1},\\lambda_{k1},\\lambda_{q2},\\lambda_{k2}\\in \\mathbb{R}^d$ are learnable vectors, and $\\lambda_{\\mathrm{init}}\\in (0,1)$ is a constant used for initialization. Empirical results show that setting $\\lambda_{\\mathrm{init}} = 0.8 - 0.6\\times \\exp (-0.3\\cdot (l - 1))$ , where $l\\in [1,L]$ represents the layer index, works well in practice.",
382
+ "bbox": [
383
+ 507,
384
+ 727,
385
+ 882,
386
+ 824
387
+ ],
388
+ "page_idx": 1
389
+ },
390
+ {
391
+ "type": "text",
392
+ "text": "2.2 DINT Attention",
393
+ "text_level": 1,
394
+ "bbox": [
395
+ 507,
396
+ 835,
397
+ 680,
398
+ 848
399
+ ],
400
+ "page_idx": 1
401
+ },
402
+ {
403
+ "type": "text",
404
+ "text": "DINT attention extends DIFF attention by introducing an integral mechanism, enhancing the model's ability to capture globally important information while maintaining numerical stability through row",
405
+ "bbox": [
406
+ 507,
407
+ 856,
408
+ 882,
409
+ 921
410
+ ],
411
+ "page_idx": 1
412
+ },
413
+ {
414
+ "type": "page_number",
415
+ "text": "9814",
416
+ "bbox": [
417
+ 480,
418
+ 927,
419
+ 519,
420
+ 940
421
+ ],
422
+ "page_idx": 1
423
+ },
424
+ {
425
+ "type": "image",
426
+ "img_path": "images/31ad5755d7c15747522595c809d40b7a5a348b94dbbb6239259f152dd0b25d9d.jpg",
427
+ "image_caption": [
428
+ "Figure 2: Multi-head DINT Attention. DIFF Attention matrix implements reducing attention noise, while the Integration Attention matrix enhances global attention."
429
+ ],
430
+ "image_footnote": [],
431
+ "bbox": [
432
+ 159,
433
+ 85,
434
+ 509,
435
+ 353
436
+ ],
437
+ "page_idx": 2
438
+ },
439
+ {
440
+ "type": "image",
441
+ "img_path": "images/7fe408db740e492ca4c65c75575a10fb090b5c9e260f8c7dede65700cf7ff21c.jpg",
442
+ "image_caption": [],
443
+ "image_footnote": [],
444
+ "bbox": [
445
+ 509,
446
+ 80,
447
+ 845,
448
+ 350
449
+ ],
450
+ "page_idx": 2
451
+ },
452
+ {
453
+ "type": "image",
454
+ "img_path": "images/480685ce65be66fd92181b7ad59764bbfb94fd2ac8eaca3c5519cf0f37bc2c52.jpg",
455
+ "image_caption": [
456
+ "(a) Scaling model size ranging from 830M to 13B."
457
+ ],
458
+ "image_footnote": [],
459
+ "bbox": [
460
+ 114,
461
+ 403,
462
+ 465,
463
+ 599
464
+ ],
465
+ "page_idx": 2
466
+ },
467
+ {
468
+ "type": "image",
469
+ "img_path": "images/9a08374fb58333f784fbb45c631712cfff30c532a9f971665336402eb27bc4c6.jpg",
470
+ "image_caption": [
471
+ "(b) Scaling number of training tokens for 3B models.",
472
+ "Figure 3: Language modeling loss of scaling up parameter count and training tokens. DINT Transformer outperforms other models, demonstrating that it requires fewer parameters or tokens to achieve comparable performance. (a) DINT Transformer matches the performance of larger models with fewer parameters. (b) DINT Transformer achieves comparable performance using significantly fewer training tokens."
473
+ ],
474
+ "image_footnote": [],
475
+ "bbox": [
476
+ 532,
477
+ 403,
478
+ 884,
479
+ 599
480
+ ],
481
+ "page_idx": 2
482
+ },
483
+ {
484
+ "type": "text",
485
+ "text": "normalization in the final attention matrix. The signal attention matrix $A_{1}$ is computed using $Q_{1}$ and $K_{1}$ :",
486
+ "bbox": [
487
+ 112,
488
+ 715,
489
+ 487,
490
+ 764
491
+ ],
492
+ "page_idx": 2
493
+ },
494
+ {
495
+ "type": "equation",
496
+ "text": "\n$$\nA _ {1} = \\operatorname {s o f t m a x} \\left(\\frac {Q _ {1} K _ {1} ^ {\\top}}{\\sqrt {d}}\\right). \\tag {4}\n$$\n",
497
+ "text_format": "latex",
498
+ "bbox": [
499
+ 198,
500
+ 782,
501
+ 487,
502
+ 819
503
+ ],
504
+ "page_idx": 2
505
+ },
506
+ {
507
+ "type": "text",
508
+ "text": "The integral component computes global importance scores by column-wise averaging of the signal attention weights. Crucially, to prevent information leakage, the averaging operation only considers tokens preceding the current token in the",
509
+ "bbox": [
510
+ 112,
511
+ 841,
512
+ 489,
513
+ 921
514
+ ],
515
+ "page_idx": 2
516
+ },
517
+ {
518
+ "type": "text",
519
+ "text": "sequence.",
520
+ "bbox": [
521
+ 509,
522
+ 717,
523
+ 586,
524
+ 731
525
+ ],
526
+ "page_idx": 2
527
+ },
528
+ {
529
+ "type": "equation",
530
+ "text": "\n$$\nG [ n,: ] = \\frac {1}{n} \\sum_ {m = 1} ^ {n} A _ {1} [ m,: ], \\tag {5}\n$$\n",
531
+ "text_format": "latex",
532
+ "bbox": [
533
+ 596,
534
+ 737,
535
+ 882,
536
+ 778
537
+ ],
538
+ "page_idx": 2
539
+ },
540
+ {
541
+ "type": "text",
542
+ "text": "where $G\\in \\mathbb{R}^{N\\times N}$",
543
+ "bbox": [
544
+ 507,
545
+ 784,
546
+ 653,
547
+ 799
548
+ ],
549
+ "page_idx": 2
550
+ },
551
+ {
552
+ "type": "text",
553
+ "text": "DINT attention operator computes the output as:",
554
+ "bbox": [
555
+ 524,
556
+ 801,
557
+ 882,
558
+ 816
559
+ ],
560
+ "page_idx": 2
561
+ },
562
+ {
563
+ "type": "equation",
564
+ "text": "\n$$\n\\operatorname {D I N T A t t n} (X) = \\left(A _ {\\text {d i f f}} + \\gamma \\cdot \\operatorname {s o f t m a x} (G)\\right) V, \\tag {6}\n$$\n",
565
+ "text_format": "latex",
566
+ "bbox": [
567
+ 519,
568
+ 822,
569
+ 882,
570
+ 854
571
+ ],
572
+ "page_idx": 2
573
+ },
574
+ {
575
+ "type": "text",
576
+ "text": "where $\\gamma$ is a learnable scalar following DIFF Transformer, $A_{\\mathrm{diff}}$ is DIFF attention component.",
577
+ "bbox": [
578
+ 507,
579
+ 857,
580
+ 882,
581
+ 888
582
+ ],
583
+ "page_idx": 2
584
+ },
585
+ {
586
+ "type": "text",
587
+ "text": "Unified Parameter Setting. By setting $\\lambda$ and $\\gamma$ to the same value, we ensure that the final attention",
588
+ "bbox": [
589
+ 507,
590
+ 889,
591
+ 880,
592
+ 919
593
+ ],
594
+ "page_idx": 2
595
+ },
596
+ {
597
+ "type": "page_number",
598
+ "text": "9815",
599
+ "bbox": [
600
+ 480,
601
+ 927,
602
+ 519,
603
+ 940
604
+ ],
605
+ "page_idx": 2
606
+ },
607
+ {
608
+ "type": "text",
609
+ "text": "matrix $A_{\\mathrm{final}}$ has rows that sum to 1. This row normalization guarantees numerical stability and consistency across the model, thusintaining data stability throughout the layers. This unified setting follows the parameterization method used in DIFF Transformer, further enhancing training stability.",
610
+ "bbox": [
611
+ 112,
612
+ 84,
613
+ 487,
614
+ 181
615
+ ],
616
+ "page_idx": 3
617
+ },
618
+ {
619
+ "type": "text",
620
+ "text": "2.3 Multi-Head Differential Attention",
621
+ "text_level": 1,
622
+ "bbox": [
623
+ 112,
624
+ 191,
625
+ 426,
626
+ 205
627
+ ],
628
+ "page_idx": 3
629
+ },
630
+ {
631
+ "type": "text",
632
+ "text": "We also use the multi-head mechanism in DINT Transformer. Let $h$ denote the number of attention heads. We use different projection matrices $W_{Q}^{i}, W_{K}^{i}, W_{V}^{i}, i \\in [1, h]$ for the heads. The scalar $\\lambda$ is shared between heads within the same layer. Then the head outputs are normalized and projected to the final results as follows:",
633
+ "bbox": [
634
+ 112,
635
+ 211,
636
+ 485,
637
+ 325
638
+ ],
639
+ "page_idx": 3
640
+ },
641
+ {
642
+ "type": "equation",
643
+ "text": "\n$$\n\\operatorname {h e a d} _ {i} = \\operatorname {D i f f A t t n} \\left(X; W _ {Q} ^ {i}, W _ {K} ^ {i}, W _ {V} ^ {i}, \\lambda\\right) \\tag {7}\n$$\n",
644
+ "text_format": "latex",
645
+ "bbox": [
646
+ 139,
647
+ 338,
648
+ 487,
649
+ 357
650
+ ],
651
+ "page_idx": 3
652
+ },
653
+ {
654
+ "type": "equation",
655
+ "text": "\n$$\n\\overline {{\\operatorname {h e a d} _ {i}}} = \\operatorname {L N} \\left(\\operatorname {h e a d} _ {i}\\right) \\tag {8}\n$$\n",
656
+ "text_format": "latex",
657
+ "bbox": [
658
+ 223,
659
+ 369,
660
+ 485,
661
+ 386
662
+ ],
663
+ "page_idx": 3
664
+ },
665
+ {
666
+ "type": "text",
667
+ "text": "$\\operatorname{MultiHead}(X) = \\operatorname{Concat}(\\overline{\\operatorname{head}}_1, \\dots, \\overline{\\operatorname{head}}_h) W_O$ (9)",
668
+ "bbox": [
669
+ 115,
670
+ 393,
671
+ 485,
672
+ 425
673
+ ],
674
+ "page_idx": 3
675
+ },
676
+ {
677
+ "type": "text",
678
+ "text": "where $W_O \\in \\mathbb{R}^{d_{\\mathrm{model}} \\times d_{\\mathrm{model}}}$ is a learnable projection matrix, $\\mathrm{LN}(\\cdot)$ uses RMSNorm for each head, and $\\mathrm{Concat}(\\cdot)$ concatenates the heads together along the channel dimension. Unlike DIFF Transformer, we do not apply an additional multiplier to the outputs of each head, as the unified parameter setting in DINT Transformer already ensures numerical stability and consistency. The number of heads is set as $h = d_{\\mathrm{model}} / 2d$ , where $d$ is the head dimension of the Transformer, to ensure that the parameter count and computational complexity are aligned.",
679
+ "bbox": [
680
+ 112,
681
+ 426,
682
+ 487,
683
+ 618
684
+ ],
685
+ "page_idx": 3
686
+ },
687
+ {
688
+ "type": "text",
689
+ "text": "Headwise Normalization. Figure 2 illustrates the use of GroupNorm(Wu and He, 2018) within the attention mechanism to stabilize training. Although Layer Normalization (LN) is applied independently to each attention head, the sparse nature of differential attention often leads to varied statistical patterns across heads. By normalizing each head individually before the concatenation step, LN ensures more consistent gradient statistics, which contributes to improved training stability(Qin et al., 2022; Wang et al., 2023).",
690
+ "bbox": [
691
+ 112,
692
+ 619,
693
+ 489,
694
+ 796
695
+ ],
696
+ "page_idx": 3
697
+ },
698
+ {
699
+ "type": "text",
700
+ "text": "2.4 Overall Architecture",
701
+ "text_level": 1,
702
+ "bbox": [
703
+ 112,
704
+ 807,
705
+ 324,
706
+ 821
707
+ ],
708
+ "page_idx": 3
709
+ },
710
+ {
711
+ "type": "text",
712
+ "text": "The overall architecture stacks $L$ layers, where each layer contains a multihead differential attention module and a feedforward network module. We describe DINT Transformer layer as:",
713
+ "bbox": [
714
+ 112,
715
+ 827,
716
+ 485,
717
+ 891
718
+ ],
719
+ "page_idx": 3
720
+ },
721
+ {
722
+ "type": "equation",
723
+ "text": "\n$$\nY ^ {l} = \\operatorname {M u l t i H e a d} \\left(\\mathrm {L N} \\left(X ^ {l}\\right)\\right) + X ^ {l} \\tag {10}\n$$\n",
724
+ "text_format": "latex",
725
+ "bbox": [
726
+ 174,
727
+ 903,
728
+ 485,
729
+ 921
730
+ ],
731
+ "page_idx": 3
732
+ },
733
+ {
734
+ "type": "equation",
735
+ "text": "\n$$\nX ^ {l + 1} = \\operatorname {S w i G L U} \\left(\\operatorname {L N} \\left(Y ^ {l}\\right)\\right) + Y ^ {l} \\tag {11}\n$$\n",
736
+ "text_format": "latex",
737
+ "bbox": [
738
+ 569,
739
+ 82,
740
+ 882,
741
+ 99
742
+ ],
743
+ "page_idx": 3
744
+ },
745
+ {
746
+ "type": "text",
747
+ "text": "where $\\mathrm{LN}(\\cdot)$ is RMSNorm, and $\\operatorname{SwiGLU}(X)$ is defined as:",
748
+ "bbox": [
749
+ 507,
750
+ 108,
751
+ 880,
752
+ 139
753
+ ],
754
+ "page_idx": 3
755
+ },
756
+ {
757
+ "type": "equation",
758
+ "text": "\n$$\n\\operatorname {S w i G L U} (X) = \\left(\\operatorname {s w i s h} \\left(X W _ {G}\\right) \\odot X W _ {1}\\right) W _ {2},\n$$\n",
759
+ "text_format": "latex",
760
+ "bbox": [
761
+ 524,
762
+ 153,
763
+ 863,
764
+ 170
765
+ ],
766
+ "page_idx": 3
767
+ },
768
+ {
769
+ "type": "text",
770
+ "text": "where $W_{G}, W_{1} \\in \\mathbb{R}^{d_{\\mathrm{model}} \\times \\frac{8}{3} d_{\\mathrm{model}}}$ , and $W_{2} \\in \\mathbb{R}^{\\frac{8}{3} d_{\\mathrm{model}} \\times d_{\\mathrm{model}}}$ are learnable matrices.",
771
+ "bbox": [
772
+ 507,
773
+ 183,
774
+ 880,
775
+ 216
776
+ ],
777
+ "page_idx": 3
778
+ },
779
+ {
780
+ "type": "text",
781
+ "text": "3 Experiments",
782
+ "text_level": 1,
783
+ "bbox": [
784
+ 507,
785
+ 230,
786
+ 655,
787
+ 244
788
+ ],
789
+ "page_idx": 3
790
+ },
791
+ {
792
+ "type": "text",
793
+ "text": "In this study, we evaluate DINT Transformer through a series of experiments, comparing it with DIFF Transformer and other baseline models. Since DINT Transformer does not introduce new learnable parameters, only increasing computational complexity, its parameter count remains unchanged. Therefore, the model configurations used in the comparison were chosen to be the same as those of DIFF Transformer. Our experiments show that by enhancing attention to globally significant tokens, DINT Transformer effectively reduces attention noise. Additionally, DINT Transformer exhibits stronger stability compared to DIFF Transformer, leading to improved performance across tasks such as long-sequence modeling, key information retrieval, and in-context learning.",
794
+ "bbox": [
795
+ 505,
796
+ 255,
797
+ 884,
798
+ 513
799
+ ],
800
+ "page_idx": 3
801
+ },
802
+ {
803
+ "type": "text",
804
+ "text": "3.1 Language Modeling Evaluation",
805
+ "text_level": 1,
806
+ "bbox": [
807
+ 507,
808
+ 524,
809
+ 803,
810
+ 539
811
+ ],
812
+ "page_idx": 3
813
+ },
814
+ {
815
+ "type": "text",
816
+ "text": "We trained a 3B DINT Transformer language model using the same configuration settings as the 3B DIFF Transformer language model. The model settings are shown in Table 1.",
817
+ "bbox": [
818
+ 507,
819
+ 545,
820
+ 880,
821
+ 608
822
+ ],
823
+ "page_idx": 3
824
+ },
825
+ {
826
+ "type": "table",
827
+ "img_path": "images/ba2c1257118fe419a1df8a2b81b7e6a25b1912b8d2a3f364cec1e2626a734cd4.jpg",
828
+ "table_caption": [],
829
+ "table_footnote": [],
830
+ "table_body": "<table><tr><td>Params</td><td>Values</td></tr><tr><td>Layers</td><td>28</td></tr><tr><td>Hidden size</td><td>3072</td></tr><tr><td>FFN size</td><td>8192</td></tr><tr><td>Vocab size</td><td>100,288</td></tr><tr><td>Heads</td><td>12</td></tr><tr><td>Adam β</td><td>(0.9, 0.95)</td></tr><tr><td>LR</td><td>3.2 × 10-4</td></tr><tr><td>Batch size</td><td>4M</td></tr><tr><td>Warmup steps</td><td>1000</td></tr><tr><td>Weight decay</td><td>0.1</td></tr><tr><td>Dropout</td><td>0.0</td></tr></table>",
831
+ "bbox": [
832
+ 579,
833
+ 619,
834
+ 811,
835
+ 818
836
+ ],
837
+ "page_idx": 3
838
+ },
839
+ {
840
+ "type": "text",
841
+ "text": "Table 1: Configuration settings used for the 3B-size DINT Transformer and DIFF Transformer models.",
842
+ "bbox": [
843
+ 507,
844
+ 827,
845
+ 880,
846
+ 854
847
+ ],
848
+ "page_idx": 3
849
+ },
850
+ {
851
+ "type": "text",
852
+ "text": "Results. Table 2 presents the zero-shot evaluation results on the LM Eval Harness benchmark (Gao et al., 2023). We compare DINT Transformer",
853
+ "bbox": [
854
+ 507,
855
+ 873,
856
+ 882,
857
+ 921
858
+ ],
859
+ "page_idx": 3
860
+ },
861
+ {
862
+ "type": "page_number",
863
+ "text": "9816",
864
+ "bbox": [
865
+ 480,
866
+ 927,
867
+ 519,
868
+ 940
869
+ ],
870
+ "page_idx": 3
871
+ },
872
+ {
873
+ "type": "text",
874
+ "text": "with other state-of-the-art Transformer-based models, including OpenLLaMA-v2-3B (Geng and Liu, 2023), StableLM-base-alpha-3B-v2 (Tow, 2023), and StableLM-3B-4E1T (Tow et al., 2023). All models were trained on 1T tokens under identical configurations to ensure fair comparison. The results demonstrate that DINT Transformer not only outperforms these baselines across all downstream tasks but also exhibits superior stability.",
875
+ "bbox": [
876
+ 112,
877
+ 84,
878
+ 489,
879
+ 230
880
+ ],
881
+ "page_idx": 4
882
+ },
883
+ {
884
+ "type": "text",
885
+ "text": "3.2 Scalability Compared with Transformer",
886
+ "text_level": 1,
887
+ "bbox": [
888
+ 112,
889
+ 240,
890
+ 475,
891
+ 256
892
+ ],
893
+ "page_idx": 4
894
+ },
895
+ {
896
+ "type": "text",
897
+ "text": "We evaluated the scalability of DINT Transformer compared to the standard Transformer and DIFF Transformer, specifically focusing on language modeling tasks. This evaluation involved scaling both model size and the number of training tokens. We adopted an enhanced Transformer architecture similar to LLaMA, ensuring a fair comparison by using identical experimental setups.",
898
+ "bbox": [
899
+ 112,
900
+ 261,
901
+ 487,
902
+ 388
903
+ ],
904
+ "page_idx": 4
905
+ },
906
+ {
907
+ "type": "text",
908
+ "text": "Scaling Model Size As shown in Figure 3(a), DINT Transformer consistently outperformed both Transformer and DIFF Transformer across various model sizes (see Table 3 for model configurations). Specifically, DINT Transformer achieved comparable validation loss to the Transformer with $44\\%$ fewer parameters and matched the performance of DIFF Transformer with $29\\%$ fewer parameters. This demonstrates the superior efficiency and scalability of DINT Transformer in terms of parameter usage.",
909
+ "bbox": [
910
+ 112,
911
+ 390,
912
+ 489,
913
+ 565
914
+ ],
915
+ "page_idx": 4
916
+ },
917
+ {
918
+ "type": "text",
919
+ "text": "Scaling Training Tokens Figure 3(b) shows the results of scaling the number of training tokens. The fitted curves indicate that DINT Transformer achieved comparable performance to the Transformer with $33\\%$ fewer training tokens. Additionally, DINT Transformer outperformed DIFF Transformer with $16\\%$ fewer training tokens. These results highlight the significant data efficiency of DINT Transformer, achieving equivalent or superior results with considerably fewer resources.",
920
+ "bbox": [
921
+ 112,
922
+ 568,
923
+ 489,
924
+ 728
925
+ ],
926
+ "page_idx": 4
927
+ },
928
+ {
929
+ "type": "text",
930
+ "text": "3.3 Key Information Retrieval",
931
+ "text_level": 1,
932
+ "bbox": [
933
+ 112,
934
+ 739,
935
+ 369,
936
+ 753
937
+ ],
938
+ "page_idx": 4
939
+ },
940
+ {
941
+ "type": "text",
942
+ "text": "The Needle-In-A-Haystack test (Kamradt, 2023) is used to evaluate the ability of models to extract key information from long contexts. Following the protocol of LWM (Liu et al., 2024) and Gemini 1.5 (Reid et al., 2024), \"needles\" are short sentences that assign a unique number to a city. The objective is to retrieve these numbers based on a given query.",
943
+ "bbox": [
944
+ 112,
945
+ 760,
946
+ 487,
947
+ 872
948
+ ],
949
+ "page_idx": 4
950
+ },
951
+ {
952
+ "type": "text",
953
+ "text": "We position the answer needle at different depths within the context (0%, 25%, 50%, 75%, 100%), while other needles are placed randomly. Each",
954
+ "bbox": [
955
+ 112,
956
+ 873,
957
+ 487,
958
+ 921
959
+ ],
960
+ "page_idx": 4
961
+ },
962
+ {
963
+ "type": "text",
964
+ "text": "combination of depth and context length is evaluated over 50 samples, and the average accuracy is reported.",
965
+ "bbox": [
966
+ 507,
967
+ 84,
968
+ 884,
969
+ 131
970
+ ],
971
+ "page_idx": 4
972
+ },
973
+ {
974
+ "type": "text",
975
+ "text": "Retrieve from 4K Context Length We evaluated the multi-needle retrieval task using 4K-length contexts, inserting $N = 1,2,4,6$ needles and retrieving $R = 1,2$ needles. The models used for evaluation were trained with an input length of 4K. As shown in Table 4, DINT Transformer consistently outperforms the other models. Particularly at $N = 6,R = 2$ , DINT achieves an accuracy of 0.88, significantly better than Transformer and DIFF models, indicating its superior ability to retrieve key information amidst distracting contexts.",
976
+ "bbox": [
977
+ 507,
978
+ 131,
979
+ 884,
980
+ 309
981
+ ],
982
+ "page_idx": 4
983
+ },
984
+ {
985
+ "type": "text",
986
+ "text": "Retrieve from 64K Context Length As shown in Figure 4, the context lengths evaluated range from 8K to 64K, with the configuration set to $N = 8$ , $R = 1$ . We evaluated the 3B-scale model with extended context (as described in Section 3.3). The accuracy is reported across different answer needle depths (y-axis) and context lengths (x-axis). The bottom row shows the average accuracy across all depths. From the figure, it can be observed that DINT Transformer consistently performs well across varying context lengths and needle depths. Notably, at a 40K context length and $25\\%$ needle depth, DINT Transformer shows a $52\\%$ improvement in accuracy compared to Transformer and a $12\\%$ improvement compared to DIFF Transformer.",
987
+ "bbox": [
988
+ 507,
989
+ 310,
990
+ 884,
991
+ 551
992
+ ],
993
+ "page_idx": 4
994
+ },
995
+ {
996
+ "type": "text",
997
+ "text": "Attention Score Analysis Table 5 presents the attention scores assigned to the answer span and the noise context in the key information retrieval task. These scores reflect the model's ability to focus on relevant information while ignoring irrelevant noise. We compare the normalized attention scores for different depths (i.e., positions) of the target answer within the context. The results show that DINT Transformer allocates significantly higher attention to the correct answer span and exhibits a substantial reduction in attention noise.",
998
+ "bbox": [
999
+ 507,
1000
+ 552,
1001
+ 882,
1002
+ 727
1003
+ ],
1004
+ "page_idx": 4
1005
+ },
1006
+ {
1007
+ "type": "text",
1008
+ "text": "3.4 In-Context Learning",
1009
+ "text_level": 1,
1010
+ "bbox": [
1011
+ 507,
1012
+ 739,
1013
+ 719,
1014
+ 755
1015
+ ],
1016
+ "page_idx": 4
1017
+ },
1018
+ {
1019
+ "type": "text",
1020
+ "text": "We investigate in-context learning from two main angles: the performance on many-shot classification tasks and the model's ability to maintain robustness when utilizing context. In-context learning is an essential trait of language models, reflecting their capability to make effective use of the provided input context.",
1021
+ "bbox": [
1022
+ 507,
1023
+ 760,
1024
+ 882,
1025
+ 872
1026
+ ],
1027
+ "page_idx": 4
1028
+ },
1029
+ {
1030
+ "type": "text",
1031
+ "text": "Many-Shot In-Context Learning As presented in Figure 5, we compare the accuracy of many-shot classification between DIFF Transformer and",
1032
+ "bbox": [
1033
+ 507,
1034
+ 873,
1035
+ 882,
1036
+ 920
1037
+ ],
1038
+ "page_idx": 4
1039
+ },
1040
+ {
1041
+ "type": "page_number",
1042
+ "text": "9817",
1043
+ "bbox": [
1044
+ 480,
1045
+ 927,
1046
+ 519,
1047
+ 940
1048
+ ],
1049
+ "page_idx": 4
1050
+ },
1051
+ {
1052
+ "type": "table",
1053
+ "img_path": "images/10a7a295bc1982ed77033c0e7973ac91fc35fd7a574e82bd81017cc34a75adc2.jpg",
1054
+ "table_caption": [],
1055
+ "table_footnote": [],
1056
+ "table_body": "<table><tr><td>Model</td><td>ARC-C</td><td>ARC-E</td><td>BoolQ</td><td>HellaSwag</td><td>OBQA</td><td>PIQA</td><td>WinoGrande</td><td>Avg</td></tr><tr><td>OpenLLaMA-3B-v2</td><td>33.9</td><td>67.6</td><td>65.7</td><td>70.0</td><td>26.6</td><td>76.7</td><td>62.9</td><td>57.5</td></tr><tr><td>StableLM-base-alpha-3B-v2</td><td>32.4</td><td>67.3</td><td>64.6</td><td>68.6</td><td>27.1</td><td>76.0</td><td>63.0</td><td>57.0</td></tr><tr><td>StableLM-3B-4E1T</td><td>-</td><td>66.6</td><td>-</td><td>-</td><td>-</td><td>76.8</td><td>63.2</td><td>-</td></tr><tr><td>DIFF-3B</td><td>36.9 ± 2.1</td><td>72.6 ± 1.7</td><td>69.2 ± 1.8</td><td>71.1 ± 2.4</td><td>29.1 ± 0.8</td><td>76.5 ± 1.0</td><td>69.2 ± 2.0</td><td>60.6</td></tr><tr><td>DINT-3B</td><td>39.2 ± 1.7</td><td>74.3 ± 1.3</td><td>70.7 ± 1.2</td><td>72.6 ± 1.7</td><td>30.3 ± 0.5</td><td>77.3 ± 0.6</td><td>72.0 ± 1.2</td><td>62.3</td></tr></table>",
1057
+ "bbox": [
1058
+ 115,
1059
+ 80,
1060
+ 885,
1061
+ 164
1062
+ ],
1063
+ "page_idx": 5
1064
+ },
1065
+ {
1066
+ "type": "text",
1067
+ "text": "Table 2: Eval Harness accuracy compared with well-trained Transformer language models. The results indicate the superior performance of DINT Transformer over other models across a range of tasks.",
1068
+ "bbox": [
1069
+ 112,
1070
+ 171,
1071
+ 882,
1072
+ 202
1073
+ ],
1074
+ "page_idx": 5
1075
+ },
1076
+ {
1077
+ "type": "image",
1078
+ "img_path": "images/65797ce23fe06f57dab39f6766fae2cb20d0451f232d71c8b7501ec43c1ff022.jpg",
1079
+ "image_caption": [
1080
+ "Figure 4: Multi-needle retrieval results in 64K length."
1081
+ ],
1082
+ "image_footnote": [],
1083
+ "bbox": [
1084
+ 149,
1085
+ 211,
1086
+ 848,
1087
+ 370
1088
+ ],
1089
+ "page_idx": 5
1090
+ },
1091
+ {
1092
+ "type": "image",
1093
+ "img_path": "images/34def1c886986bfdf2ed6c21043ec26f7630e459c2700ec8815cf001773a4b4b.jpg",
1094
+ "image_caption": [
1095
+ "(a) TREC dataset (6 classes)"
1096
+ ],
1097
+ "image_footnote": [],
1098
+ "bbox": [
1099
+ 174,
1100
+ 432,
1101
+ 489,
1102
+ 608
1103
+ ],
1104
+ "page_idx": 5
1105
+ },
1106
+ {
1107
+ "type": "image",
1108
+ "img_path": "images/2a3a072d61bca3ef79a89f07338c944c07f1d8b8da998d7b7d494fee68b701b3.jpg",
1109
+ "image_caption": [
1110
+ "(b) TREC-fine dataset (50 classes)"
1111
+ ],
1112
+ "image_footnote": [],
1113
+ "bbox": [
1114
+ 505,
1115
+ 432,
1116
+ 821,
1117
+ 609
1118
+ ],
1119
+ "page_idx": 5
1120
+ },
1121
+ {
1122
+ "type": "image",
1123
+ "img_path": "images/dfaaa09edd7b40873909753120f4e32211571bd4b7e4be95d12b01c270c9131d.jpg",
1124
+ "image_caption": [
1125
+ "(c) Banking-77 dataset (77 classes)"
1126
+ ],
1127
+ "image_footnote": [],
1128
+ "bbox": [
1129
+ 176,
1130
+ 639,
1131
+ 489,
1132
+ 816
1133
+ ],
1134
+ "page_idx": 5
1135
+ },
1136
+ {
1137
+ "type": "image",
1138
+ "img_path": "images/740297a760cc81126bbf6e43d425e3cfbdeab7b1782fb8cb0228f921c6e8eaeb.jpg",
1139
+ "image_caption": [
1140
+ "(d) Clinic-150 dataset (150 classes)",
1141
+ "Figure 5: Accuracy of many-shot in-context learning across four datasets, with demonstration examples increasing from 1-shot up to a total of 64K tokens. The dashed lines indicate the average accuracy once the model's performance stabilizes."
1142
+ ],
1143
+ "image_footnote": [],
1144
+ "bbox": [
1145
+ 507,
1146
+ 640,
1147
+ 821,
1148
+ 818
1149
+ ],
1150
+ "page_idx": 5
1151
+ },
1152
+ {
1153
+ "type": "page_number",
1154
+ "text": "9818",
1155
+ "bbox": [
1156
+ 480,
1157
+ 928,
1158
+ 519,
1159
+ 940
1160
+ ],
1161
+ "page_idx": 5
1162
+ },
1163
+ {
1164
+ "type": "image",
1165
+ "img_path": "images/a1ecdebe228fec7fb1e1ba60fa89203a44d762688b66f9150a21f07f26b4b063.jpg",
1166
+ "image_caption": [
1167
+ "(a) Examples are randomly arranged."
1168
+ ],
1169
+ "image_footnote": [],
1170
+ "bbox": [
1171
+ 110,
1172
+ 80,
1173
+ 489,
1174
+ 277
1175
+ ],
1176
+ "page_idx": 6
1177
+ },
1178
+ {
1179
+ "type": "image",
1180
+ "img_path": "images/e10a43e12a60e35af6dff3d0a00fdde35b52e9e2d3008842f0f2ef78b1777c75.jpg",
1181
+ "image_caption": [
1182
+ "(b) Examples are arranged alternately by class.",
1183
+ "Figure 6: Many-shot in-context learning accuracy on four datasets. The accuracy for both DIFF Transformer and DINT (Ours) models is presented, showing performance improvements across different numbers of demonstration samples."
1184
+ ],
1185
+ "image_footnote": [],
1186
+ "bbox": [
1187
+ 507,
1188
+ 79,
1189
+ 884,
1190
+ 275
1191
+ ],
1192
+ "page_idx": 6
1193
+ },
1194
+ {
1195
+ "type": "table",
1196
+ "img_path": "images/074b62a6a21bd38d397acb1194075522f23751838a2c90ae6eddfcbd54c3b60b.jpg",
1197
+ "table_caption": [],
1198
+ "table_footnote": [],
1199
+ "table_body": "<table><tr><td>Size</td><td>Hidden Dim.</td><td>#Layers</td><td>#Heads</td></tr><tr><td>830M</td><td>1536</td><td>24</td><td>8</td></tr><tr><td>1.4B</td><td>2048</td><td>24</td><td>8</td></tr><tr><td>2.8B</td><td>2560</td><td>32</td><td>10</td></tr><tr><td>6.8B</td><td>4096</td><td>32</td><td>16</td></tr><tr><td>13.1B</td><td>5120</td><td>40</td><td>20</td></tr></table>",
1200
+ "bbox": [
1201
+ 124,
1202
+ 376,
1203
+ 475,
1204
+ 475
1205
+ ],
1206
+ "page_idx": 6
1207
+ },
1208
+ {
1209
+ "type": "table",
1210
+ "img_path": "images/0450518cf89ee4fdef0c68b5a04d2e5e5a496e25ff86f387edc801d5950968b3.jpg",
1211
+ "table_caption": [
1212
+ "Table 3: Model configurations for different sizes, including hidden dimension, number of layers, and number of attention heads. Each model was trained with a sequence length of 2048 and a batch size of 0.25 million tokens, for a total of 40K training steps."
1213
+ ],
1214
+ "table_footnote": [],
1215
+ "table_body": "<table><tr><td rowspan=\"2\">Model</td><td>N=1</td><td>N=2</td><td>N=4</td><td>N=6</td></tr><tr><td>R=1</td><td>R=2</td><td>R=2</td><td>R=2</td></tr><tr><td>Transformer</td><td>1.00</td><td>0.85</td><td>0.62</td><td>0.55</td></tr><tr><td>DIFF</td><td>1.00</td><td>0.92</td><td>0.84</td><td>0.85</td></tr><tr><td>DINT</td><td>1.00</td><td>0.96</td><td>0.89</td><td>0.88</td></tr></table>",
1216
+ "bbox": [
1217
+ 127,
1218
+ 574,
1219
+ 475,
1220
+ 648
1221
+ ],
1222
+ "page_idx": 6
1223
+ },
1224
+ {
1225
+ "type": "text",
1226
+ "text": "Table 4: Multi-needle retrieval accuracy in 4K length contexts, averaged over the answer needle positions. $N$ represents the number of needles, and $R$ denotes the number of query cities.",
1227
+ "bbox": [
1228
+ 112,
1229
+ 658,
1230
+ 487,
1231
+ 715
1232
+ ],
1233
+ "page_idx": 6
1234
+ },
1235
+ {
1236
+ "type": "text",
1237
+ "text": "our DINT Transformer architecture. We evaluate the 3B-size language models that support 64K input length. We follow the evaluation protocol of (Bertsch et al., 2024) and use constrained decoding (Ratner et al., 2023). The number of demonstration samples is incrementally increased from 1-shot until the total length reaches 64K tokens. Specifically, we evaluate the models on the following datasets: TREC (Hovy et al., 2001) with 50 classes, Banking-77 (Casanueva et al., 2020) with 77 classes, and Clinic-150 (Larson et al., 2019) with 150 classes.",
1238
+ "bbox": [
1239
+ 112,
1240
+ 744,
1241
+ 490,
1242
+ 921
1243
+ ],
1244
+ "page_idx": 6
1245
+ },
1246
+ {
1247
+ "type": "text",
1248
+ "text": "The results show that DINT Transformer consistently outperforms DIFF Transformer across all datasets and varying numbers of demonstration samples. The improvement in average accuracy is substantial, with DINT achieving $2.8\\%$ higher accuracy on TREC, $4.1\\%$ on TREC-Fine, $4.3\\%$ on Banking-77, and $1.8\\%$ on Clinic-150.",
1249
+ "bbox": [
1250
+ 507,
1251
+ 379,
1252
+ 884,
1253
+ 491
1254
+ ],
1255
+ "page_idx": 6
1256
+ },
1257
+ {
1258
+ "type": "text",
1259
+ "text": "Robustness of In-Context Learning Figure 6 presents a comparison of the robustness between DIFF Transformer and DINT Transformer in the context of in-context learning. By analyzing how performance varies with different order permutations of the same set of demonstration examples, we find that smaller performance fluctuations reflect greater robustness and a reduced risk of catastrophic degradation. The evaluation protocol remains consistent with the previously outlined methodology. Figure 6 displays the results of this analysis on the TREC dataset. We examine two prompt configurations: randomly shuffled examples and examples arranged by class in an alternating pattern. In both configurations, DINT Transformer consistently shows smaller performance fluctuations compared to DIFF Transformer, demonstrating that our approach enhances robustness in in-context learning tasks.",
1260
+ "bbox": [
1261
+ 507,
1262
+ 493,
1263
+ 884,
1264
+ 799
1265
+ ],
1266
+ "page_idx": 6
1267
+ },
1268
+ {
1269
+ "type": "text",
1270
+ "text": "3.5 Ablation Studies",
1271
+ "text_level": 1,
1272
+ "bbox": [
1273
+ 509,
1274
+ 816,
1275
+ 687,
1276
+ 831
1277
+ ],
1278
+ "page_idx": 6
1279
+ },
1280
+ {
1281
+ "type": "text",
1282
+ "text": "We perform ablation studies using 1.4B-parameter language models, with the same training setup as the 1.4B model in Section 3.2. Both models have 24 layers, with 16 attention heads for Transformer and 8 for DIFF Transformer, each having a head",
1283
+ "bbox": [
1284
+ 507,
1285
+ 841,
1286
+ 882,
1287
+ 921
1288
+ ],
1289
+ "page_idx": 6
1290
+ },
1291
+ {
1292
+ "type": "page_number",
1293
+ "text": "9819",
1294
+ "bbox": [
1295
+ 480,
1296
+ 927,
1297
+ 519,
1298
+ 940
1299
+ ],
1300
+ "page_idx": 6
1301
+ },
1302
+ {
1303
+ "type": "table",
1304
+ "img_path": "images/ddee8108d4289803fa60ba4a382bbac425f3e7bab1d0decda8c0579c3bfcf583.jpg",
1305
+ "table_caption": [],
1306
+ "table_footnote": [],
1307
+ "table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"5\">Attention to Answer↑</td><td colspan=\"5\">Attention Noise↓</td></tr><tr><td>0%</td><td>25%</td><td>50%</td><td>75%</td><td>100%</td><td>0%</td><td>25%</td><td>50%</td><td>75%</td><td>100%</td></tr><tr><td>Transformer</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.07</td><td>0.09</td><td>0.51</td><td>0.54</td><td>0.52</td><td>0.49</td><td>0.49</td></tr><tr><td>DIFF</td><td>0.27</td><td>0.30</td><td>0.31</td><td>0.32</td><td>0.40</td><td>0.01</td><td>0.02</td><td>0.02</td><td>0.02</td><td>0.01</td></tr><tr><td>DINT (Ours)</td><td>0.35</td><td>0.38</td><td>0.40</td><td>0.41</td><td>0.45</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td></tr></table>",
1308
+ "bbox": [
1309
+ 161,
1310
+ 80,
1311
+ 833,
1312
+ 180
1313
+ ],
1314
+ "page_idx": 7
1315
+ },
1316
+ {
1317
+ "type": "table",
1318
+ "img_path": "images/ec61c605f3fd8f1c2df2ac490a8169e3fe39507a6426c20c2d19b7df1e53d7ed.jpg",
1319
+ "table_caption": [
1320
+ "Table 5: Attention scores allocated to answer spans and noise context in the key information retrieval task. The target answer is inserted at varying depths within the context. DINT Transformer allocates more attention to relevant information and effectively minimizes attention noise."
1321
+ ],
1322
+ "table_footnote": [],
1323
+ "table_body": "<table><tr><td rowspan=\"2\">Model</td><td rowspan=\"2\">#Heads</td><td rowspan=\"2\">d</td><td rowspan=\"2\">GN</td><td rowspan=\"2\">Valid. Set↓</td><td colspan=\"2\">Fine-Grained Slices</td></tr><tr><td>AR-Hit↓</td><td>Others↓</td></tr><tr><td>DIFF</td><td>8</td><td>128</td><td>✓</td><td>3.062</td><td>0.880</td><td>3.247</td></tr><tr><td>-GroupNorm</td><td>8</td><td>128</td><td>✗</td><td>3.122</td><td>0.911</td><td>3.309</td></tr><tr><td>with λinit = 0.8</td><td>8</td><td>128</td><td>✓</td><td>3.065</td><td>0.883</td><td>3.250</td></tr><tr><td>with λinit = 0.5</td><td>8</td><td>128</td><td>✓</td><td>3.066</td><td>0.882</td><td>3.251</td></tr><tr><td>DINT (Ours)</td><td>8</td><td>128</td><td>✓</td><td>3.055</td><td>0.875</td><td>3.243</td></tr><tr><td>-GroupNorm</td><td>8</td><td>128</td><td>✗</td><td>3.075</td><td>0.893</td><td>3.256</td></tr><tr><td>with λinit = 0.8</td><td>8</td><td>128</td><td>✓</td><td>3.056</td><td>0.877</td><td>3.245</td></tr><tr><td>with λinit = 0.5</td><td>8</td><td>128</td><td>✓</td><td>3.058</td><td>0.878</td><td>3.245</td></tr></table>",
1324
+ "bbox": [
1325
+ 201,
1326
+ 244,
1327
+ 796,
1328
+ 431
1329
+ ],
1330
+ "page_idx": 7
1331
+ },
1332
+ {
1333
+ "type": "text",
1334
+ "text": "Table 6: Ablation Studies of 1.4B-Size Models.",
1335
+ "bbox": [
1336
+ 334,
1337
+ 439,
1338
+ 658,
1339
+ 455
1340
+ ],
1341
+ "page_idx": 7
1342
+ },
1343
+ {
1344
+ "type": "text",
1345
+ "text": "dimension of 128.",
1346
+ "bbox": [
1347
+ 112,
1348
+ 481,
1349
+ 253,
1350
+ 495
1351
+ ],
1352
+ "page_idx": 7
1353
+ },
1354
+ {
1355
+ "type": "text",
1356
+ "text": "Table 6 reports the fine-grained loss on the validation set, breaking it into two components: \"AR-Hit\" and \"Others.\" \"AR-Hit\" evaluates the model's ability to recall previously seen n-grams, while \"Others\" represents tokens that are either frequent or not recalled from the context.",
1357
+ "bbox": [
1358
+ 112,
1359
+ 508,
1360
+ 489,
1361
+ 604
1362
+ ],
1363
+ "page_idx": 7
1364
+ },
1365
+ {
1366
+ "type": "text",
1367
+ "text": "As shown in Table 6, we performed ablation studies on various design choices in DINT Transformer and compared them with Transformer variants. All models are of similar size and training FLOPs for a fair comparison. The results indicate that our method outperforms DIFF Transformer in both overall loss and fine-grained loss. When GroupNorm is removed, the performance of DIFF Transformer is significantly affected, while DINT Transformer shows a smaller impact. This is because we ensure the row normalization of the attention matrix, which improves the model's overall robustness. Additionally, when using constant initialization for lambda, we observe a slight decrease in performance, but the model still maintains a high level of performance. This demonstrates the effectiveness of our initialization method and shows that the model is robust to different initialization choices.",
1368
+ "bbox": [
1369
+ 112,
1370
+ 615,
1371
+ 489,
1372
+ 920
1373
+ ],
1374
+ "page_idx": 7
1375
+ },
1376
+ {
1377
+ "type": "text",
1378
+ "text": "4 Conclusions",
1379
+ "text_level": 1,
1380
+ "bbox": [
1381
+ 509,
1382
+ 480,
1383
+ 650,
1384
+ 495
1385
+ ],
1386
+ "page_idx": 7
1387
+ },
1388
+ {
1389
+ "type": "text",
1390
+ "text": "We propose DINT Transformer, which integrates global attention statistics into DIFF Transformer to reduce noise and enhance focus on key words. This improves the model's ability to capture global information, ensuring better stability and scalability. Experiments show DINT Transformer excels in long-sequence modeling, key information retrieval, and in-context learning, making it highly promising for NLP tasks requiring global context awareness.",
1391
+ "bbox": [
1392
+ 507,
1393
+ 517,
1394
+ 885,
1395
+ 663
1396
+ ],
1397
+ "page_idx": 7
1398
+ },
1399
+ {
1400
+ "type": "text",
1401
+ "text": "5 Limitations",
1402
+ "text_level": 1,
1403
+ "bbox": [
1404
+ 509,
1405
+ 690,
1406
+ 645,
1407
+ 706
1408
+ ],
1409
+ "page_idx": 7
1410
+ },
1411
+ {
1412
+ "type": "text",
1413
+ "text": "While the integration mechanism in DINT Transformer has significantly improved model performance, this design inevitably introduces additional computational complexity. These computational characteristics present new optimization opportunities for large-scale model deployment, particularly when processing long-sequence inputs. Through our algorithm-system co-design approach, we are actively developing more efficient implementations to further enhance the computational efficiency of DINT Transformer.",
1414
+ "bbox": [
1415
+ 507,
1416
+ 728,
1417
+ 884,
1418
+ 904
1419
+ ],
1420
+ "page_idx": 7
1421
+ },
1422
+ {
1423
+ "type": "page_number",
1424
+ "text": "9820",
1425
+ "bbox": [
1426
+ 480,
1427
+ 927,
1428
+ 521,
1429
+ 940
1430
+ ],
1431
+ "page_idx": 7
1432
+ },
1433
+ {
1434
+ "type": "text",
1435
+ "text": "References",
1436
+ "text_level": 1,
1437
+ "bbox": [
1438
+ 115,
1439
+ 84,
1440
+ 213,
1441
+ 98
1442
+ ],
1443
+ "page_idx": 8
1444
+ },
1445
+ {
1446
+ "type": "list",
1447
+ "sub_type": "ref_text",
1448
+ "list_items": [
1449
+ "Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R. Gormley, and Graham Neubig. 2024. In-context learning with long-context models: An in-depth exploration. arXiv preprint, arXiv:2405.00200.",
1450
+ "Inigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. 2020. Efficient intent detection with dual sentence encoders. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38-45.",
1451
+ "L Gao, J Tow, B Abbasi, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, A Le Noac'h, and 1 others. 2023. A framework for few-shot language model evaluation, 12 2023. URL https://zenodo.org/records/10256836, 7.",
1452
+ "Xinyang Geng and Hao Liu. 2023. Openllama: An open reproduction of llama. URL: https://github.com/openlm-research/open_llama.",
1453
+ "Eduard Hovy, Laurie Gerber, Ulf Hermjakob, ChinYew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In Proceedings of the first international conference on Human language technology research.",
1454
+ "Greg Kamradt. 2023. Needle in a haystack - pressure testing llms. https://github.com/gkamradt/LLMTest_NeedleInAHaystack/tree/main.",
1455
+ "Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, and et al. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316.",
1456
+ "Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel. 2024. World model on million-length video and language with ringattention. arXiv preprint, arXiv:2402.08268.",
1457
+ "Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2021. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786.",
1458
+ "Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. 2022. The devil in linear transformer. arXiv preprint arXiv:2210.10340.",
1459
+ "Prajit Ramachandran, Barret Zoph, and Quoc V Le. 2017. Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941, 7(1):5.",
1460
+ "Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon"
1461
+ ],
1462
+ "bbox": [
1463
+ 115,
1464
+ 107,
1465
+ 489,
1466
+ 920
1467
+ ],
1468
+ "page_idx": 8
1469
+ },
1470
+ {
1471
+ "type": "list",
1472
+ "sub_type": "ref_text",
1473
+ "list_items": [
1474
+ "Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. Parallel context windows for large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, pages 6383-6402.",
1475
+ "Machel Reid, Nikolay Savinov, Denis Teptyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, and et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint, arXiv:2403.05530.",
1476
+ "Noam Shazeer. 2020. Glu variants improve transformer. arXiv preprint arXiv:2002.05202.",
1477
+ "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, and 1 others. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.",
1478
+ "Jonathan Tow. 2023. Stablelm alpha v2 models. https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2.",
1479
+ "Jonathan Tow, Marco Bellagente, Dakota Mahan, and Carlos Riquelme. 2023. Stablelm 3b 4e1t. https://aka.ms/StableLM-3B-4E1T.",
1480
+ "A Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems.",
1481
+ "Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, and 1 others. 2023. Magneto: a foundation transformer. In International Conference on Machine Learning, pages 36077-36092. PMLR.",
1482
+ "Yuxin Wu and Kaiming He. 2018. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3-19.",
1483
+ "Tianzhu Ye, Li Dong, Yuqing Xia, Yutao Sun, Yi Zhu, Gao Huang, and Furu Wei. 2024. Differential transformer. arXiv preprint arXiv:2410.05258.",
1484
+ "Biao Zhang and Rico Sennrich. 2019. Root mean square layer normalization. Advances in Neural Information Processing Systems, 32."
1485
+ ],
1486
+ "bbox": [
1487
+ 510,
1488
+ 85,
1489
+ 882,
1490
+ 752
1491
+ ],
1492
+ "page_idx": 8
1493
+ },
1494
+ {
1495
+ "type": "page_number",
1496
+ "text": "9821",
1497
+ "bbox": [
1498
+ 480,
1499
+ 928,
1500
+ 517,
1501
+ 940
1502
+ ],
1503
+ "page_idx": 8
1504
+ }
1505
+ ]
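A note on the DINT mechanism captured in this batch: the extracted paper defines DINT attention (Eqs. 4-6 in the model.json blocks below) as a differential term A1 − λ·A2 plus an integral term γ·softmax(G), where row n of G is the running mean of the first n rows of the signal matrix A1, and γ is tied to λ so that every row of the final attention matrix sums to 1. The snippet below is a minimal single-head sketch of that computation, assuming PyTorch; the function name `dint_attention`, the argument layout, and the explicit causal masking of G are illustrative choices of ours, not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def dint_attention(q1, q2, k1, k2, v, lam):
    """Single-head DINT attention sketch (Eqs. 4-6 of the extracted paper).

    q1, q2, k1, k2: (N, d) query/key projections; v: (N, 2d) values;
    lam: the scalar lambda, reused as gamma per the paper's unified
    parameter setting so that final attention rows sum to 1.
    """
    n, d = q1.shape
    causal = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)

    def masked_softmax(q, k):
        scores = (q @ k.T) / d ** 0.5
        return F.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)

    a1 = masked_softmax(q1, k1)   # signal attention A1, Eq. (4)
    a2 = masked_softmax(q2, k2)
    a_diff = a1 - lam * a2        # differential component, Eq. (2)

    # Integral component, Eq. (5): row n of G averages rows 1..n of A1,
    # so only preceding tokens contribute (no information leakage).
    counts = torch.arange(1, n + 1, dtype=a1.dtype).unsqueeze(1)
    g = a1.cumsum(dim=0) / counts
    g = g.masked_fill(causal, float("-inf"))  # keep the integral term causal

    # Eq. (6) with gamma = lam: rows of A1 and softmax(G) each sum to 1 and
    # rows of lam * A2 sum to lam, so final rows sum to 1 - lam + lam = 1.
    return (a_diff + lam * F.softmax(g, dim=-1)) @ v
```

One quick sanity check under these assumptions: the combined matrix `a_diff + lam * F.softmax(g, dim=-1)` has rows summing to 1, which is exactly the row-normalization property the paper's "Unified Parameter Setting" paragraph attributes to DINT.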
2025/DINT Transformer/0d93eff8-c6dd-408c-9936-774d22532538_model.json ADDED
@@ -0,0 +1,1835 @@
1
+ [
2
+ [
3
+ {
4
+ "type": "title",
5
+ "bbox": [
6
+ 0.397,
7
+ 0.087,
8
+ 0.602,
9
+ 0.105
10
+ ],
11
+ "angle": 0,
12
+ "content": "DINT Transformer"
13
+ },
14
+ {
15
+ "type": "text",
16
+ "bbox": [
17
+ 0.18,
18
+ 0.119,
19
+ 0.314,
20
+ 0.136
21
+ ],
22
+ "angle": 0,
23
+ "content": "Yueyang Cang*"
24
+ },
25
+ {
26
+ "type": "text",
27
+ "bbox": [
28
+ 0.442,
29
+ 0.119,
30
+ 0.554,
31
+ 0.136
32
+ ],
33
+ "angle": 0,
34
+ "content": "Yuhang Liu*"
35
+ },
36
+ {
37
+ "type": "text",
38
+ "bbox": [
39
+ 0.679,
40
+ 0.12,
41
+ 0.822,
42
+ 0.137
43
+ ],
44
+ "angle": 0,
45
+ "content": "Xiaoteng Zhang"
46
+ },
47
+ {
48
+ "type": "text",
49
+ "bbox": [
50
+ 0.376,
51
+ 0.137,
52
+ 0.623,
53
+ 0.153
54
+ ],
55
+ "angle": 0,
56
+ "content": "Tsinghua University / Beijing"
57
+ },
58
+ {
59
+ "type": "text",
60
+ "bbox": [
61
+ 0.262,
62
+ 0.154,
63
+ 0.737,
64
+ 0.169
65
+ ],
66
+ "angle": 0,
67
+ "content": "{cangyy23,yh-liu23,zxt21}@mails.tsinghua.edu.cn"
68
+ },
69
+ {
70
+ "type": "title",
71
+ "bbox": [
72
+ 0.285,
73
+ 0.188,
74
+ 0.379,
75
+ 0.202
76
+ ],
77
+ "angle": 0,
78
+ "content": "Erlu Zhao"
79
+ },
80
+ {
81
+ "type": "text",
82
+ "bbox": [
83
+ 0.126,
84
+ 0.205,
85
+ 0.788,
86
+ 0.221
87
+ ],
88
+ "angle": 0,
89
+ "content": "Peking University Health Science Center / BeiJing Tsinghua University / BeiJing"
90
+ },
91
+ {
92
+ "type": "text",
93
+ "bbox": [
94
+ 0.221,
95
+ 0.222,
96
+ 0.444,
97
+ 0.237
98
+ ],
99
+ "angle": 0,
100
+ "content": "2411210080@bjmu.edu.cn"
101
+ },
102
+ {
103
+ "type": "text",
104
+ "bbox": [
105
+ 0.594,
106
+ 0.222,
107
+ 0.74,
108
+ 0.236
109
+ ],
110
+ "angle": 0,
111
+ "content": "zzushi@126.com"
112
+ },
113
+ {
114
+ "type": "title",
115
+ "bbox": [
116
+ 0.261,
117
+ 0.261,
118
+ 0.341,
119
+ 0.277
120
+ ],
121
+ "angle": 0,
122
+ "content": "Abstract"
123
+ },
124
+ {
125
+ "type": "text",
126
+ "bbox": [
127
+ 0.142,
128
+ 0.287,
129
+ 0.461,
130
+ 0.629
131
+ ],
132
+ "angle": 0,
133
+ "content": "The DIFF Transformer mitigates interference from irrelevant contexts by introducing a differential attention mechanism, thereby enhancing focus on critical tokens. However, this architecture suffers from two major limitations: first, its use of two independent attention matrices leads to numerical instability, and second, it lacks global context modeling, which is essential for identifying globally significant tokens. To address these challenges, we propose the DINT Transformer, which extends the DIFF Transformer by incorporating an integral mechanism. By computing global importance scores and integrating them into the attention matrix, the DINT Transformer not only improves overall numerical stability but also significantly enhances its ability to capture global dependencies. Experimental results demonstrate that the DINT Transformer achieves superior accuracy and robustness across various practical applications, including long-context language modeling and key information retrieval. These advancements establish the DINT Transformer as a highly effective and promising architecture."
134
+ },
135
+ {
136
+ "type": "title",
137
+ "bbox": [
138
+ 0.115,
139
+ 0.651,
140
+ 0.26,
141
+ 0.667
142
+ ],
143
+ "angle": 0,
144
+ "content": "1 Introduction"
145
+ },
146
+ {
147
+ "type": "text",
148
+ "bbox": [
149
+ 0.113,
150
+ 0.676,
151
+ 0.49,
152
+ 0.867
153
+ ],
154
+ "angle": 0,
155
+ "content": "Transformer(Vaswani, 2017), as one of the most popular models in the field of artificial intelligence today, is widely used in natural language processing, computer vision, and other fields, especially with the application of decoder-only architectures in large language models (LLMs). Its core lies in the attention mechanism based on softmax, which assigns importance to different tokens in a sequence. However, recent research(Lu et al., 2021) has found that LLMs face the challenge of attention noise when accurately focusing on key information in the context."
156
+ },
157
+ {
158
+ "type": "text",
159
+ "bbox": [
160
+ 0.114,
161
+ 0.87,
162
+ 0.49,
163
+ 0.901
164
+ ],
165
+ "angle": 0,
166
+ "content": "To address the issue of attention noise, DIFF Transformer(Ye et al., 2024) introduces a differen"
167
+ },
168
+ {
169
+ "type": "text",
170
+ "bbox": [
171
+ 0.508,
172
+ 0.262,
173
+ 0.885,
174
+ 0.406
175
+ ],
176
+ "angle": 0,
177
+ "content": "tial attention mechanism that effectively suppresses the impact of irrelevant context by computing DIFFerence between two independent attention distributions. However, DIFF Transformer still exhibits significant limitations: The use of two independent attention matrices makes it difficult to accurately estimate weights for noisy components, resulting in numerical instability that may adversely affect downstream task performance."
178
+ },
179
+ {
180
+ "type": "text",
181
+ "bbox": [
182
+ 0.508,
183
+ 0.407,
184
+ 0.885,
185
+ 0.728
186
+ ],
187
+ "angle": 0,
188
+ "content": "Through our research, we observed that the semantic interpretation of most tokens in a sequence often depends on a few globally critical tokens. Taking sentence processing as an example, key elements such as subjects or main predicate verbs frequently serve as semantic anchors (as illustrated in Figure 1), playing a decisive role in constructing overall meaning. Building on this insight, we developed DINT Transformer by introducing an integral mechanism to extend DIFF Transformer. This integral component computes global importance scores, enabling the model to enhance its focus on critical tokens. Our proposed DINT Transformer not only further reduces attention noise by strengthening the focus on globally important tokens, but also significantly decreases the frequency of negative values in attention matrices through parametric design, thereby improving the model's overall numerical stability and substantially boosting performance."
189
+ },
190
+ {
191
+ "type": "text",
192
+ "bbox": [
193
+ 0.508,
194
+ 0.729,
195
+ 0.885,
196
+ 0.921
197
+ ],
198
+ "angle": 0,
199
+ "content": "Through comprehensive experiments on long-context language modeling and key information retrieval tasks, we rigorously validated the efficacy of DINT Transformer. The results demonstrate that DINT Transformer consistently outperforms both conventional Transformer and DIFF Transformer across all tasks. Its integral mechanism not only effectively captures global dependencies and further suppresses attention noise, but also significantly enhances model stability, successfully addressing inherent limitations of existing approaches. Moreover, while maintaining excellent scalability, DINT"
200
+ },
201
+ {
202
+ "type": "page_footnote",
203
+ "bbox": [
204
+ 0.136,
205
+ 0.907,
206
+ 0.263,
207
+ 0.921
208
+ ],
209
+ "angle": 0,
210
+ "content": "*Equal contribution."
211
+ },
212
+ {
213
+ "type": "page_number",
214
+ "bbox": [
215
+ 0.48,
216
+ 0.928,
217
+ 0.521,
218
+ 0.941
219
+ ],
220
+ "angle": 0,
221
+ "content": "9813"
222
+ },
223
+ {
224
+ "type": "footer",
225
+ "bbox": [
226
+ 0.16,
227
+ 0.946,
228
+ 0.836,
229
+ 0.973
230
+ ],
231
+ "angle": 0,
232
+ "content": "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 9813-9821 November 4-9, 2025 ©2025 Association for Computational Linguistics"
233
+ }
234
+ ],
235
+ [
236
+ {
237
+ "type": "image_caption",
238
+ "bbox": [
239
+ 0.134,
240
+ 0.09,
241
+ 0.873,
242
+ 0.123
243
+ ],
244
+ "angle": 0,
245
+ "content": "Einstein was born in Germany and was a physicist, mathematician, and philosopher."
246
+ },
247
+ {
248
+ "type": "image",
249
+ "bbox": [
250
+ 0.129,
251
+ 0.124,
252
+ 0.855,
253
+ 0.309
254
+ ],
255
+ "angle": 0,
256
+ "content": null
257
+ },
258
+ {
259
+ "type": "image_caption",
260
+ "bbox": [
261
+ 0.112,
262
+ 0.322,
263
+ 0.884,
264
+ 0.381
265
+ ],
266
+ "angle": 0,
267
+ "content": "Figure 1: The DIFF Transformer's use of two independent attention matrices results in a significant proportion of negative values in its final attention scores. In contrast, the DINT Transformer substantially reduces the occurrence of negative values, enhances numerical stability, and more effectively strengthens attention to globally important tokens—such as precisely focusing on key entities like \"Newton\" within sentences."
268
+ },
269
+ {
270
+ "type": "text",
271
+ "bbox": [
272
+ 0.113,
273
+ 0.405,
274
+ 0.49,
275
+ 0.502
276
+ ],
277
+ "angle": 0,
278
+ "content": "Transformer delivers substantial performance improvements in downstream tasks such as key information retrieval. These significant findings establish DINT Transformer as a robust foundational architecture for future advancements in sequence modeling and large language models."
279
+ },
280
+ {
281
+ "type": "title",
282
+ "bbox": [
283
+ 0.114,
284
+ 0.512,
285
+ 0.317,
286
+ 0.527
287
+ ],
288
+ "angle": 0,
289
+ "content": "2 DINT Transformer"
290
+ },
291
+ {
292
+ "type": "text",
293
+ "bbox": [
294
+ 0.113,
295
+ 0.537,
296
+ 0.49,
297
+ 0.827
298
+ ],
299
+ "angle": 0,
300
+ "content": "DINT Transformer is designed as a robust architecture for sequence modeling, particularly for large language models (LLMs). The model consists of \\(L\\) stacked layers, where each layer applies a DINT attention module followed by a feedforward network. Starting from token embeddings \\(X_0 \\in \\mathbb{R}^{N \\times d_{\\mathrm{model}}}\\), the input is progressively transformed through \\(L\\) layers to produce the final output \\(X_L\\). The key innovation lies in the addition of an integral mechanism within the attention module, which enables effective modeling of global dependencies while preserving numerical stability. The overall structure aligns with common practices, incorporating pre-RMSNorm(Zhang and Sennrich, 2019) and SwiGLU(Ramachandran et al., 2017; Shazeer, 2020) for enhanced performance following LLaMA(Touvron et al., 2023). A diagram of the model architecture is shown in Figure 2."
301
+ },
302
+ {
303
+ "type": "title",
304
+ "bbox": [
305
+ 0.114,
306
+ 0.837,
307
+ 0.285,
308
+ 0.851
309
+ ],
310
+ "angle": 0,
311
+ "content": "2.1 DIFF Attention"
312
+ },
313
+ {
314
+ "type": "text",
315
+ "bbox": [
316
+ 0.113,
317
+ 0.857,
318
+ 0.49,
319
+ 0.922
320
+ ],
321
+ "angle": 0,
322
+ "content": "DIFF attention introduces a differential attention mechanism that reduces attention noise by leveraging the difference between two attention distributions. Specifically, given the input \\(X \\in \\mathbb{R}^{N \\times d_{\\mathrm{model}}}\\),"
323
+ },
324
+ {
325
+ "type": "text",
326
+ "bbox": [
327
+ 0.509,
328
+ 0.405,
329
+ 0.864,
330
+ 0.421
331
+ ],
332
+ "angle": 0,
333
+ "content": "it is projected to query, key, and value matrices:"
334
+ },
335
+ {
336
+ "type": "equation",
337
+ "bbox": [
338
+ 0.517,
339
+ 0.431,
340
+ 0.884,
341
+ 0.457
342
+ ],
343
+ "angle": 0,
344
+ "content": "\\[\n[ Q _ {1}; Q _ {2} ] = X W _ {Q}, \\quad [ K _ {1}; K _ {2} ] = X W _ {K}, \\quad V = X W _ {V}, \\tag {1}\n\\]"
345
+ },
346
+ {
347
+ "type": "text",
348
+ "bbox": [
349
+ 0.508,
350
+ 0.457,
351
+ 0.885,
352
+ 0.523
353
+ ],
354
+ "angle": 0,
355
+ "content": "where \\(Q_{1}, Q_{2}, K_{1}, K_{2} \\in \\mathbb{R}^{N \\times d}\\) and \\(V \\in \\mathbb{R}^{N \\times 2d}\\) are the projected matrices, and \\(W_{Q}, W_{K}, W_{V} \\in \\mathbb{R}^{d_{\\mathrm{model}} \\times 2d}\\) are learnable parameters. The differential attention operator computes the output as:"
356
+ },
357
+ {
358
+ "type": "equation",
359
+ "bbox": [
360
+ 0.527,
361
+ 0.533,
362
+ 0.884,
363
+ 0.571
364
+ ],
365
+ "angle": 0,
366
+ "content": "\\[\n\\operatorname {D i f f A t t n} (X) = \\left(\\operatorname {s o f t m a x} \\left(\\frac {Q _ {1} K _ {1} ^ {\\top}}{\\sqrt {d}}\\right) - \\lambda \\cdot \\operatorname {s o f t m a x} \\left(\\frac {Q _ {2} K _ {2} ^ {\\top}}{\\sqrt {d}}\\right)\\right) V \\tag {2}\n\\]"
367
+ },
368
+ {
369
+ "type": "text",
370
+ "bbox": [
371
+ 0.508,
372
+ 0.572,
373
+ 0.884,
374
+ 0.685
375
+ ],
376
+ "angle": 0,
377
+ "content": "where \\(\\lambda\\) is a learnable scalar parameter. This differential mechanism effectively suppresses irrelevant context, enhancing the robustness of the attention scores by canceling common-mode noise, analogous to the operation of differential amplifiers in electrical engineering. To synchronize learning dynamics, \\(\\lambda\\) is re-parameterized as:"
378
+ },
379
+ {
380
+ "type": "equation",
381
+ "bbox": [
382
+ 0.518,
383
+ 0.698,
384
+ 0.883,
385
+ 0.715
386
+ ],
387
+ "angle": 0,
388
+ "content": "\\[\n\\lambda = \\exp \\left(\\lambda_ {q 1} \\cdot \\lambda_ {k 1}\\right) - \\exp \\left(\\lambda_ {q 2} \\cdot \\lambda_ {k 2}\\right) + \\lambda_ {\\text {i n i t}}, \\tag {3}\n\\]"
389
+ },
390
+ {
391
+ "type": "text",
392
+ "bbox": [
393
+ 0.508,
394
+ 0.728,
395
+ 0.884,
396
+ 0.825
397
+ ],
398
+ "angle": 0,
399
+ "content": "where \\(\\lambda_{q1},\\lambda_{k1},\\lambda_{q2},\\lambda_{k2}\\in \\mathbb{R}^d\\) are learnable vectors, and \\(\\lambda_{\\mathrm{init}}\\in (0,1)\\) is a constant used for initialization. Empirical results show that setting \\(\\lambda_{\\mathrm{init}} = 0.8 - 0.6\\times \\exp (-0.3\\cdot (l - 1))\\) , where \\(l\\in [1,L]\\) represents the layer index, works well in practice."
400
+ },
401
+ {
402
+ "type": "title",
403
+ "bbox": [
404
+ 0.509,
405
+ 0.836,
406
+ 0.682,
407
+ 0.85
408
+ ],
409
+ "angle": 0,
410
+ "content": "2.2 DINT Attention"
411
+ },
412
+ {
413
+ "type": "text",
414
+ "bbox": [
415
+ 0.508,
416
+ 0.857,
417
+ 0.884,
418
+ 0.922
419
+ ],
420
+ "angle": 0,
421
+ "content": "DINT attention extends DIFF attention by introducing an integral mechanism, enhancing the model's ability to capture globally important information while maintaining numerical stability through row"
422
+ },
423
+ {
424
+ "type": "page_number",
425
+ "bbox": [
426
+ 0.481,
427
+ 0.928,
428
+ 0.521,
429
+ 0.941
430
+ ],
431
+ "angle": 0,
432
+ "content": "9814"
433
+ }
434
+ ],
435
+ [
436
+ {
437
+ "type": "image",
438
+ "bbox": [
439
+ 0.16,
440
+ 0.086,
441
+ 0.51,
442
+ 0.354
443
+ ],
444
+ "angle": 0,
445
+ "content": null
446
+ },
447
+ {
448
+ "type": "image",
449
+ "bbox": [
450
+ 0.51,
451
+ 0.082,
452
+ 0.847,
453
+ 0.351
454
+ ],
455
+ "angle": 0,
456
+ "content": null
457
+ },
458
+ {
459
+ "type": "image_caption",
460
+ "bbox": [
461
+ 0.113,
462
+ 0.363,
463
+ 0.883,
464
+ 0.392
465
+ ],
466
+ "angle": 0,
467
+ "content": "Figure 2: Multi-head DINT Attention. DIFF Attention matrix implements reducing attention noise, while the Integration Attention matrix enhances global attention."
468
+ },
469
+ {
470
+ "type": "image",
471
+ "bbox": [
472
+ 0.115,
473
+ 0.404,
474
+ 0.466,
475
+ 0.6
476
+ ],
477
+ "angle": 0,
478
+ "content": null
479
+ },
480
+ {
481
+ "type": "image_caption",
482
+ "bbox": [
483
+ 0.116,
484
+ 0.608,
485
+ 0.461,
486
+ 0.623
487
+ ],
488
+ "angle": 0,
489
+ "content": "(a) Scaling model size ranging from 830M to 13B."
490
+ },
491
+ {
492
+ "type": "image",
493
+ "bbox": [
494
+ 0.534,
495
+ 0.404,
496
+ 0.885,
497
+ 0.6
498
+ ],
499
+ "angle": 0,
500
+ "content": null
501
+ },
502
+ {
503
+ "type": "image_caption",
504
+ "bbox": [
505
+ 0.534,
506
+ 0.608,
507
+ 0.881,
508
+ 0.623
509
+ ],
510
+ "angle": 0,
511
+ "content": "(b) Scaling number of training tokens for 3B models."
512
+ },
513
+ {
514
+ "type": "image_caption",
515
+ "bbox": [
516
+ 0.113,
517
+ 0.634,
518
+ 0.884,
519
+ 0.692
520
+ ],
521
+ "angle": 0,
522
+ "content": "Figure 3: Language modeling loss of scaling up parameter count and training tokens. DINT Transformer outperforms other models, demonstrating that it requires fewer parameters or tokens to achieve comparable performance. (a) DINT Transformer matches the performance of larger models with fewer parameters. (b) DINT Transformer achieves comparable performance using significantly fewer training tokens."
523
+ },
524
+ {
525
+ "type": "text",
526
+ "bbox": [
527
+ 0.113,
528
+ 0.717,
529
+ 0.489,
530
+ 0.765
531
+ ],
532
+ "angle": 0,
533
+ "content": "normalization in the final attention matrix. The signal attention matrix \\(A_{1}\\) is computed using \\(Q_{1}\\) and \\(K_{1}\\):"
534
+ },
535
+ {
536
+ "type": "equation",
537
+ "bbox": [
538
+ 0.199,
539
+ 0.783,
540
+ 0.488,
541
+ 0.82
542
+ ],
543
+ "angle": 0,
544
+ "content": "\\[\nA _ {1} = \\operatorname {s o f t m a x} \\left(\\frac {Q _ {1} K _ {1} ^ {\\top}}{\\sqrt {d}}\\right). \\tag {4}\n\\]"
545
+ },
546
+ {
547
+ "type": "text",
548
+ "bbox": [
549
+ 0.113,
550
+ 0.842,
551
+ 0.49,
552
+ 0.922
553
+ ],
554
+ "angle": 0,
555
+ "content": "The integral component computes global importance scores by column-wise averaging of the signal attention weights. Crucially, to prevent information leakage, the averaging operation only considers tokens preceding the current token in the"
556
+ },
557
+ {
558
+ "type": "text",
559
+ "bbox": [
560
+ 0.51,
561
+ 0.718,
562
+ 0.587,
563
+ 0.732
564
+ ],
565
+ "angle": 0,
566
+ "content": "sequence."
567
+ },
568
+ {
569
+ "type": "equation",
570
+ "bbox": [
571
+ 0.597,
572
+ 0.738,
573
+ 0.883,
574
+ 0.779
575
+ ],
576
+ "angle": 0,
577
+ "content": "\\[\nG [ n,: ] = \\frac {1}{n} \\sum_ {m = 1} ^ {n} A _ {1} [ m,: ], \\tag {5}\n\\]"
578
+ },
579
+ {
580
+ "type": "text",
581
+ "bbox": [
582
+ 0.509,
583
+ 0.785,
584
+ 0.655,
585
+ 0.8
586
+ ],
587
+ "angle": 0,
588
+ "content": "where \\(G\\in \\mathbb{R}^{N\\times N}\\)"
589
+ },
590
+ {
591
+ "type": "text",
592
+ "bbox": [
593
+ 0.525,
594
+ 0.802,
595
+ 0.884,
596
+ 0.818
597
+ ],
598
+ "angle": 0,
599
+ "content": "DINT attention operator computes the output as:"
600
+ },
601
+ {
602
+ "type": "equation",
603
+ "bbox": [
604
+ 0.52,
605
+ 0.824,
606
+ 0.883,
607
+ 0.856
608
+ ],
609
+ "angle": 0,
610
+ "content": "\\[\n\\operatorname {D I N T A t t n} (X) = \\left(A _ {\\text {d i f f}} + \\gamma \\cdot \\operatorname {s o f t m a x} (G)\\right) V, \\tag {6}\n\\]"
611
+ },
612
+ {
613
+ "type": "text",
614
+ "bbox": [
615
+ 0.508,
616
+ 0.858,
617
+ 0.884,
618
+ 0.889
619
+ ],
620
+ "angle": 0,
621
+ "content": "where \\(\\gamma\\) is a learnable scalar following DIFF Transformer, \\(A_{\\mathrm{diff}}\\) is DIFF attention component."
622
+ },
623
+ {
624
+ "type": "text",
625
+ "bbox": [
626
+ 0.509,
627
+ 0.89,
628
+ 0.882,
629
+ 0.92
630
+ ],
631
+ "angle": 0,
632
+ "content": "Unified Parameter Setting. By setting \\(\\lambda\\) and \\(\\gamma\\) to the same value, we ensure that the final attention"
633
+ },
634
+ {
635
+ "type": "page_number",
636
+ "bbox": [
637
+ 0.481,
638
+ 0.928,
639
+ 0.52,
640
+ 0.941
641
+ ],
642
+ "angle": 0,
643
+ "content": "9815"
644
+ }
645
+ ],
646
+ [
647
+ {
648
+ "type": "text",
649
+ "bbox": [
650
+ 0.113,
651
+ 0.085,
652
+ 0.488,
653
+ 0.182
654
+ ],
655
+ "angle": 0,
656
+ "content": "matrix \\( A_{\\mathrm{final}} \\) has rows that sum to 1. This row normalization guarantees numerical stability and consistency across the model, thusintaining data stability throughout the layers. This unified setting follows the parameterization method used in DIFF Transformer, further enhancing training stability."
657
+ },
658
+ {
659
+ "type": "title",
660
+ "bbox": [
661
+ 0.114,
662
+ 0.192,
663
+ 0.427,
664
+ 0.206
665
+ ],
666
+ "angle": 0,
667
+ "content": "2.3 Multi-Head Differential Attention"
668
+ },
669
+ {
670
+ "type": "text",
671
+ "bbox": [
672
+ 0.113,
673
+ 0.212,
674
+ 0.487,
675
+ 0.326
676
+ ],
677
+ "angle": 0,
678
+ "content": "We also use the multi-head mechanism in DINT Transformer. Let \\( h \\) denote the number of attention heads. We use different projection matrices \\( W_{Q}^{i}, W_{K}^{i}, W_{V}^{i}, i \\in [1, h] \\) for the heads. The scalar \\( \\lambda \\) is shared between heads within the same layer. Then the head outputs are normalized and projected to the final results as follows:"
679
+ },
680
+ {
681
+ "type": "equation",
682
+ "bbox": [
683
+ 0.141,
684
+ 0.339,
685
+ 0.488,
686
+ 0.358
687
+ ],
688
+ "angle": 0,
689
+ "content": "\\[\n\\operatorname {h e a d} _ {i} = \\operatorname {D i f f A t t n} \\left(X; W _ {Q} ^ {i}, W _ {K} ^ {i}, W _ {V} ^ {i}, \\lambda\\right) \\tag {7}\n\\]"
690
+ },
691
+ {
692
+ "type": "equation",
693
+ "bbox": [
694
+ 0.225,
695
+ 0.37,
696
+ 0.487,
697
+ 0.387
698
+ ],
699
+ "angle": 0,
700
+ "content": "\\[\n\\overline {{\\operatorname {h e a d} _ {i}}} = \\operatorname {L N} \\left(\\operatorname {h e a d} _ {i}\\right) \\tag {8}\n\\]"
701
+ },
702
+ {
703
+ "type": "text",
704
+ "bbox": [
705
+ 0.116,
706
+ 0.394,
707
+ 0.487,
708
+ 0.426
709
+ ],
710
+ "angle": 0,
711
+ "content": "\\(\\operatorname{MultiHead}(X) = \\operatorname{Concat}(\\overline{\\operatorname{head}}_1, \\dots, \\overline{\\operatorname{head}}_h) W_O\\) (9)"
712
+ },
713
+ {
714
+ "type": "text",
715
+ "bbox": [
716
+ 0.113,
717
+ 0.427,
718
+ 0.489,
719
+ 0.619
720
+ ],
721
+ "angle": 0,
722
+ "content": "where \\( W_O \\in \\mathbb{R}^{d_{\\mathrm{model}} \\times d_{\\mathrm{model}}} \\) is a learnable projection matrix, \\( \\mathrm{LN}(\\cdot) \\) uses RMSNorm for each head, and \\( \\mathrm{Concat}(\\cdot) \\) concatenates the heads together along the channel dimension. Unlike DIFF Transformer, we do not apply an additional multiplier to the outputs of each head, as the unified parameter setting in DINT Transformer already ensures numerical stability and consistency. The number of heads is set as \\( h = d_{\\mathrm{model}} / 2d \\), where \\( d \\) is the head dimension of the Transformer, to ensure that the parameter count and computational complexity are aligned."
723
+ },
724
+ {
725
+ "type": "text",
726
+ "bbox": [
727
+ 0.113,
728
+ 0.62,
729
+ 0.49,
730
+ 0.797
731
+ ],
732
+ "angle": 0,
733
+ "content": "Headwise Normalization. Figure 2 illustrates the use of GroupNorm(Wu and He, 2018) within the attention mechanism to stabilize training. Although Layer Normalization (LN) is applied independently to each attention head, the sparse nature of differential attention often leads to varied statistical patterns across heads. By normalizing each head individually before the concatenation step, LN ensures more consistent gradient statistics, which contributes to improved training stability(Qin et al., 2022; Wang et al., 2023)."
734
+ },
735
+ {
736
+ "type": "title",
737
+ "bbox": [
738
+ 0.114,
739
+ 0.808,
740
+ 0.325,
741
+ 0.822
742
+ ],
743
+ "angle": 0,
744
+ "content": "2.4 Overall Architecture"
745
+ },
746
+ {
747
+ "type": "text",
748
+ "bbox": [
749
+ 0.113,
750
+ 0.828,
751
+ 0.487,
752
+ 0.892
753
+ ],
754
+ "angle": 0,
755
+ "content": "The overall architecture stacks \\(L\\) layers, where each layer contains a multihead differential attention module and a feedforward network module. We describe DINT Transformer layer as:"
756
+ },
757
+ {
758
+ "type": "equation",
759
+ "bbox": [
760
+ 0.176,
761
+ 0.904,
762
+ 0.487,
763
+ 0.922
764
+ ],
765
+ "angle": 0,
766
+ "content": "\\[\nY ^ {l} = \\operatorname {M u l t i H e a d} \\left(\\mathrm {L N} \\left(X ^ {l}\\right)\\right) + X ^ {l} \\tag {10}\n\\]"
767
+ },
768
+ {
769
+ "type": "equation",
770
+ "bbox": [
771
+ 0.57,
772
+ 0.083,
773
+ 0.883,
774
+ 0.101
775
+ ],
776
+ "angle": 0,
777
+ "content": "\\[\nX ^ {l + 1} = \\operatorname {S w i G L U} \\left(\\operatorname {L N} \\left(Y ^ {l}\\right)\\right) + Y ^ {l} \\tag {11}\n\\]"
778
+ },
779
+ {
780
+ "type": "text",
781
+ "bbox": [
782
+ 0.508,
783
+ 0.109,
784
+ 0.882,
785
+ 0.14
786
+ ],
787
+ "angle": 0,
788
+ "content": "where \\(\\mathrm{LN}(\\cdot)\\) is RMSNorm, and \\(\\operatorname{SwiGLU}(X)\\) is defined as:"
789
+ },
790
+ {
791
+ "type": "equation",
792
+ "bbox": [
793
+ 0.526,
794
+ 0.154,
795
+ 0.864,
796
+ 0.171
797
+ ],
798
+ "angle": 0,
799
+ "content": "\\[\n\\operatorname {S w i G L U} (X) = \\left(\\operatorname {s w i s h} \\left(X W _ {G}\\right) \\odot X W _ {1}\\right) W _ {2},\n\\]"
800
+ },
801
+ {
802
+ "type": "text",
803
+ "bbox": [
804
+ 0.509,
805
+ 0.184,
806
+ 0.881,
807
+ 0.217
808
+ ],
809
+ "angle": 0,
810
+ "content": "where \\(W_{G}, W_{1} \\in \\mathbb{R}^{d_{\\mathrm{model}} \\times \\frac{8}{3} d_{\\mathrm{model}}}\\), and \\(W_{2} \\in \\mathbb{R}^{\\frac{8}{3} d_{\\mathrm{model}} \\times d_{\\mathrm{model}}}\\) are learnable matrices."
811
+ },
812
+ {
813
+ "type": "title",
814
+ "bbox": [
815
+ 0.509,
816
+ 0.231,
817
+ 0.656,
818
+ 0.246
819
+ ],
820
+ "angle": 0,
821
+ "content": "3 Experiments"
822
+ },
823
+ {
824
+ "type": "text",
825
+ "bbox": [
826
+ 0.507,
827
+ 0.256,
828
+ 0.885,
829
+ 0.514
830
+ ],
831
+ "angle": 0,
832
+ "content": "In this study, we evaluate DINT Transformer through a series of experiments, comparing it with DIFF Transformer and other baseline models. Since DINT Transformer does not introduce new learnable parameters, only increasing computational complexity, its parameter count remains unchanged. Therefore, the model configurations used in the comparison were chosen to be the same as those of DIFF Transformer. Our experiments show that by enhancing attention to globally significant tokens, DINT Transformer effectively reduces attention noise. Additionally, DINT Transformer exhibits stronger stability compared to DIFF Transformer, leading to improved performance across tasks such as long-sequence modeling, key information retrieval, and in-context learning."
833
+ },
834
+ {
835
+ "type": "title",
836
+ "bbox": [
837
+ 0.509,
838
+ 0.525,
839
+ 0.804,
840
+ 0.54
841
+ ],
842
+ "angle": 0,
843
+ "content": "3.1 Language Modeling Evaluation"
844
+ },
845
+ {
846
+ "type": "text",
847
+ "bbox": [
848
+ 0.508,
849
+ 0.546,
850
+ 0.882,
851
+ 0.609
852
+ ],
853
+ "angle": 0,
854
+ "content": "We trained a 3B DINT Transformer language model using the same configuration settings as the 3B DIFF Transformer language model. The model settings are shown in Table 1."
855
+ },
856
+ {
857
+ "type": "table",
858
+ "bbox": [
859
+ 0.581,
860
+ 0.62,
861
+ 0.813,
862
+ 0.819
863
+ ],
864
+ "angle": 0,
865
+ "content": "<table><tr><td>Params</td><td>Values</td></tr><tr><td>Layers</td><td>28</td></tr><tr><td>Hidden size</td><td>3072</td></tr><tr><td>FFN size</td><td>8192</td></tr><tr><td>Vocab size</td><td>100,288</td></tr><tr><td>Heads</td><td>12</td></tr><tr><td>Adam β</td><td>(0.9, 0.95)</td></tr><tr><td>LR</td><td>3.2 × 10-4</td></tr><tr><td>Batch size</td><td>4M</td></tr><tr><td>Warmup steps</td><td>1000</td></tr><tr><td>Weight decay</td><td>0.1</td></tr><tr><td>Dropout</td><td>0.0</td></tr></table>"
866
+ },
867
+ {
868
+ "type": "table_caption",
869
+ "bbox": [
870
+ 0.508,
871
+ 0.828,
872
+ 0.881,
873
+ 0.856
874
+ ],
875
+ "angle": 0,
876
+ "content": "Table 1: Configuration settings used for the 3B-size DINT Transformer and DIFF Transformer models."
877
+ },
878
+ {
879
+ "type": "text",
880
+ "bbox": [
881
+ 0.509,
882
+ 0.874,
883
+ 0.884,
884
+ 0.922
885
+ ],
886
+ "angle": 0,
887
+ "content": "Results. Table 2 presents the zero-shot evaluation results on the LM Eval Harness benchmark (Gao et al., 2023). We compare DINT Transformer"
888
+ },
889
+ {
890
+ "type": "page_number",
891
+ "bbox": [
892
+ 0.481,
893
+ 0.928,
894
+ 0.521,
895
+ 0.941
896
+ ],
897
+ "angle": 0,
898
+ "content": "9816"
899
+ }
900
+ ],
901
+ [
902
+ {
903
+ "type": "text",
904
+ "bbox": [
905
+ 0.113,
906
+ 0.085,
907
+ 0.49,
908
+ 0.231
909
+ ],
910
+ "angle": 0,
911
+ "content": "with other state-of-the-art Transformer-based models, including OpenLLaMA-v2-3B (Geng and Liu, 2023), StableLM-base-alpha-3B-v2 (Tow, 2023), and StableLM-3B-4E1T (Tow et al., 2023). All models were trained on 1T tokens under identical configurations to ensure fair comparison. The results demonstrate that DINT Transformer not only outperforms these baselines across all downstream tasks but also exhibits superior stability."
912
+ },
913
+ {
914
+ "type": "title",
915
+ "bbox": [
916
+ 0.114,
917
+ 0.241,
918
+ 0.476,
919
+ 0.258
920
+ ],
921
+ "angle": 0,
922
+ "content": "3.2 Scalability Compared with Transformer"
923
+ },
924
+ {
925
+ "type": "text",
926
+ "bbox": [
927
+ 0.113,
928
+ 0.262,
929
+ 0.489,
930
+ 0.389
931
+ ],
932
+ "angle": 0,
933
+ "content": "We evaluated the scalability of DINT Transformer compared to the standard Transformer and DIFF Transformer, specifically focusing on language modeling tasks. This evaluation involved scaling both model size and the number of training tokens. We adopted an enhanced Transformer architecture similar to LLaMA, ensuring a fair comparison by using identical experimental setups."
934
+ },
935
+ {
936
+ "type": "text",
937
+ "bbox": [
938
+ 0.113,
939
+ 0.391,
940
+ 0.49,
941
+ 0.567
942
+ ],
943
+ "angle": 0,
944
+ "content": "Scaling Model Size As shown in Figure 3(a), DINT Transformer consistently outperformed both Transformer and DIFF Transformer across various model sizes (see Table 3 for model configurations). Specifically, DINT Transformer achieved comparable validation loss to the Transformer with \\(44\\%\\) fewer parameters and matched the performance of DIFF Transformer with \\(29\\%\\) fewer parameters. This demonstrates the superior efficiency and scalability of DINT Transformer in terms of parameter usage."
945
+ },
946
+ {
947
+ "type": "text",
948
+ "bbox": [
949
+ 0.113,
950
+ 0.569,
951
+ 0.49,
952
+ 0.729
953
+ ],
954
+ "angle": 0,
955
+ "content": "Scaling Training Tokens Figure 3(b) shows the results of scaling the number of training tokens. The fitted curves indicate that DINT Transformer achieved comparable performance to the Transformer with \\(33\\%\\) fewer training tokens. Additionally, DINT Transformer outperformed DIFF Transformer with \\(16\\%\\) fewer training tokens. These results highlight the significant data efficiency of DINT Transformer, achieving equivalent or superior results with considerably fewer resources."
956
+ },
957
+ {
958
+ "type": "title",
959
+ "bbox": [
960
+ 0.114,
961
+ 0.74,
962
+ 0.37,
963
+ 0.755
964
+ ],
965
+ "angle": 0,
966
+ "content": "3.3 Key Information Retrieval"
967
+ },
968
+ {
969
+ "type": "text",
970
+ "bbox": [
971
+ 0.113,
972
+ 0.761,
973
+ 0.489,
974
+ 0.873
975
+ ],
976
+ "angle": 0,
977
+ "content": "The Needle-In-A-Haystack test (Kamradt, 2023) is used to evaluate the ability of models to extract key information from long contexts. Following the protocol of LWM (Liu et al., 2024) and Gemini 1.5 (Reid et al., 2024), \"needles\" are short sentences that assign a unique number to a city. The objective is to retrieve these numbers based on a given query."
978
+ },
979
+ {
980
+ "type": "text",
981
+ "bbox": [
982
+ 0.113,
983
+ 0.875,
984
+ 0.489,
985
+ 0.922
986
+ ],
987
+ "angle": 0,
988
+ "content": "We position the answer needle at different depths within the context (0%, 25%, 50%, 75%, 100%), while other needles are placed randomly. Each"
989
+ },
990
+ {
991
+ "type": "text",
992
+ "bbox": [
993
+ 0.508,
994
+ 0.085,
995
+ 0.885,
996
+ 0.133
997
+ ],
998
+ "angle": 0,
999
+ "content": "combination of depth and context length is evaluated over 50 samples, and the average accuracy is reported."
1000
+ },
1001
+ {
1002
+ "type": "text",
1003
+ "bbox": [
1004
+ 0.508,
1005
+ 0.133,
1006
+ 0.885,
1007
+ 0.31
1008
+ ],
1009
+ "angle": 0,
1010
+ "content": "Retrieve from 4K Context Length We evaluated the multi-needle retrieval task using 4K-length contexts, inserting \\( N = 1,2,4,6 \\) needles and retrieving \\( R = 1,2 \\) needles. The models used for evaluation were trained with an input length of 4K. As shown in Table 4, DINT Transformer consistently outperforms the other models. Particularly at \\( N = 6,R = 2 \\), DINT achieves an accuracy of 0.88, significantly better than Transformer and DIFF models, indicating its superior ability to retrieve key information amidst distracting contexts."
1011
+ },
1012
+ {
1013
+ "type": "text",
1014
+ "bbox": [
1015
+ 0.508,
1016
+ 0.311,
1017
+ 0.885,
1018
+ 0.552
1019
+ ],
1020
+ "angle": 0,
1021
+ "content": "Retrieve from 64K Context Length As shown in Figure 4, the context lengths evaluated range from 8K to 64K, with the configuration set to \\( N = 8 \\), \\( R = 1 \\). We evaluated the 3B-scale model with extended context (as described in Section 3.3). The accuracy is reported across different answer needle depths (y-axis) and context lengths (x-axis). The bottom row shows the average accuracy across all depths. From the figure, it can be observed that DINT Transformer consistently performs well across varying context lengths and needle depths. Notably, at a 40K context length and \\( 25\\% \\) needle depth, DINT Transformer shows a \\( 52\\% \\) improvement in accuracy compared to Transformer and a \\( 12\\% \\) improvement compared to DIFF Transformer."
1022
+ },
1023
+ {
1024
+ "type": "text",
1025
+ "bbox": [
1026
+ 0.508,
1027
+ 0.553,
1028
+ 0.884,
1029
+ 0.728
1030
+ ],
1031
+ "angle": 0,
1032
+ "content": "Attention Score Analysis Table 5 presents the attention scores assigned to the answer span and the noise context in the key information retrieval task. These scores reflect the model's ability to focus on relevant information while ignoring irrelevant noise. We compare the normalized attention scores for different depths (i.e., positions) of the target answer within the context. The results show that DINT Transformer allocates significantly higher attention to the correct answer span and exhibits a substantial reduction in attention noise."
1033
+ },
1034
+ {
1035
+ "type": "title",
1036
+ "bbox": [
1037
+ 0.509,
1038
+ 0.74,
1039
+ 0.72,
1040
+ 0.756
1041
+ ],
1042
+ "angle": 0,
1043
+ "content": "3.4 In-Context Learning"
1044
+ },
1045
+ {
1046
+ "type": "text",
1047
+ "bbox": [
1048
+ 0.508,
1049
+ 0.761,
1050
+ 0.884,
1051
+ 0.873
1052
+ ],
1053
+ "angle": 0,
1054
+ "content": "We investigate in-context learning from two main angles: the performance on many-shot classification tasks and the model's ability to maintain robustness when utilizing context. In-context learning is an essential trait of language models, reflecting their capability to make effective use of the provided input context."
1055
+ },
1056
+ {
1057
+ "type": "text",
1058
+ "bbox": [
1059
+ 0.508,
1060
+ 0.874,
1061
+ 0.884,
1062
+ 0.921
1063
+ ],
1064
+ "angle": 0,
1065
+ "content": "Many-Shot In-Context Learning As presented in Figure 5, we compare the accuracy of many-shot classification between DIFF Transformer and"
1066
+ },
1067
+ {
1068
+ "type": "page_number",
1069
+ "bbox": [
1070
+ 0.481,
1071
+ 0.928,
1072
+ 0.521,
1073
+ 0.941
1074
+ ],
1075
+ "angle": 0,
1076
+ "content": "9817"
1077
+ }
1078
+ ],
1079
+ [
1080
+ {
1081
+ "type": "table",
1082
+ "bbox": [
1083
+ 0.117,
1084
+ 0.082,
1085
+ 0.886,
1086
+ 0.165
1087
+ ],
1088
+ "angle": 0,
1089
+ "content": "<table><tr><td>Model</td><td>ARC-C</td><td>ARC-E</td><td>BoolQ</td><td>HellaSwag</td><td>OBQA</td><td>PIQA</td><td>WinoGrande</td><td>Avg</td></tr><tr><td>OpenLLaMA-3B-v2</td><td>33.9</td><td>67.6</td><td>65.7</td><td>70.0</td><td>26.6</td><td>76.7</td><td>62.9</td><td>57.5</td></tr><tr><td>StableLM-base-alpha-3B-v2</td><td>32.4</td><td>67.3</td><td>64.6</td><td>68.6</td><td>27.1</td><td>76.0</td><td>63.0</td><td>57.0</td></tr><tr><td>StableLM-3B-4E1T</td><td>-</td><td>66.6</td><td>-</td><td>-</td><td>-</td><td>76.8</td><td>63.2</td><td>-</td></tr><tr><td>DIFF-3B</td><td>36.9 ± 2.1</td><td>72.6 ± 1.7</td><td>69.2 ± 1.8</td><td>71.1 ± 2.4</td><td>29.1 ± 0.8</td><td>76.5 ± 1.0</td><td>69.2 ± 2.0</td><td>60.6</td></tr><tr><td>DINT-3B</td><td>39.2 ± 1.7</td><td>74.3 ± 1.3</td><td>70.7 ± 1.2</td><td>72.6 ± 1.7</td><td>30.3 ± 0.5</td><td>77.3 ± 0.6</td><td>72.0 ± 1.2</td><td>62.3</td></tr></table>"
1090
+ },
1091
+ {
1092
+ "type": "table_caption",
1093
+ "bbox": [
1094
+ 0.113,
1095
+ 0.172,
1096
+ 0.884,
1097
+ 0.203
1098
+ ],
1099
+ "angle": 0,
1100
+ "content": "Table 2: Eval Harness accuracy compared with well-trained Transformer language models. The results indicate the superior performance of DINT Transformer over other models across a range of tasks."
1101
+ },
1102
+ {
1103
+ "type": "image",
1104
+ "bbox": [
1105
+ 0.151,
1106
+ 0.212,
1107
+ 0.85,
1108
+ 0.372
1109
+ ],
1110
+ "angle": 0,
1111
+ "content": null
1112
+ },
1113
+ {
1114
+ "type": "image_caption",
1115
+ "bbox": [
1116
+ 0.315,
1117
+ 0.379,
1118
+ 0.68,
1119
+ 0.395
1120
+ ],
1121
+ "angle": 0,
1122
+ "content": "Figure 4: Multi-needle retrieval results in 64K length."
1123
+ },
1124
+ {
1125
+ "type": "image",
1126
+ "bbox": [
1127
+ 0.175,
1128
+ 0.434,
1129
+ 0.49,
1130
+ 0.609
1131
+ ],
1132
+ "angle": 0,
1133
+ "content": null
1134
+ },
1135
+ {
1136
+ "type": "image_caption",
1137
+ "bbox": [
1138
+ 0.245,
1139
+ 0.613,
1140
+ 0.42,
1141
+ 0.626
1142
+ ],
1143
+ "angle": 0,
1144
+ "content": "(a) TREC dataset (6 classes)"
1145
+ },
1146
+ {
1147
+ "type": "image",
1148
+ "bbox": [
1149
+ 0.506,
1150
+ 0.433,
1151
+ 0.822,
1152
+ 0.61
1153
+ ],
1154
+ "angle": 0,
1155
+ "content": null
1156
+ },
1157
+ {
1158
+ "type": "image_caption",
1159
+ "bbox": [
1160
+ 0.561,
1161
+ 0.613,
1162
+ 0.771,
1163
+ 0.626
1164
+ ],
1165
+ "angle": 0,
1166
+ "content": "(b) TREC-fine dataset (50 classes)"
1167
+ },
1168
+ {
1169
+ "type": "image",
1170
+ "bbox": [
1171
+ 0.177,
1172
+ 0.64,
1173
+ 0.49,
1174
+ 0.818
1175
+ ],
1176
+ "angle": 0,
1177
+ "content": null
1178
+ },
1179
+ {
1180
+ "type": "image_caption",
1181
+ "bbox": [
1182
+ 0.225,
1183
+ 0.824,
1184
+ 0.44,
1185
+ 0.837
1186
+ ],
1187
+ "angle": 0,
1188
+ "content": "(c) Banking-77 dataset (77 classes)"
1189
+ },
1190
+ {
1191
+ "type": "image",
1192
+ "bbox": [
1193
+ 0.508,
1194
+ 0.642,
1195
+ 0.822,
1196
+ 0.819
1197
+ ],
1198
+ "angle": 0,
1199
+ "content": null
1200
+ },
1201
+ {
1202
+ "type": "image_caption",
1203
+ "bbox": [
1204
+ 0.557,
1205
+ 0.823,
1206
+ 0.774,
1207
+ 0.836
1208
+ ],
1209
+ "angle": 0,
1210
+ "content": "(d) Clinic-150 dataset (150 classes)"
1211
+ },
1212
+ {
1213
+ "type": "image_caption",
1214
+ "bbox": [
1215
+ 0.113,
1216
+ 0.848,
1217
+ 0.884,
1218
+ 0.89
1219
+ ],
1220
+ "angle": 0,
1221
+ "content": "Figure 5: Accuracy of many-shot in-context learning across four datasets, with demonstration examples increasing from 1-shot up to a total of 64K tokens. The dashed lines indicate the average accuracy once the model's performance stabilizes."
1222
+ },
1223
+ {
1224
+ "type": "page_number",
1225
+ "bbox": [
1226
+ 0.481,
1227
+ 0.929,
1228
+ 0.521,
1229
+ 0.941
1230
+ ],
1231
+ "angle": 0,
1232
+ "content": "9818"
1233
+ }
1234
+ ],
1235
+ [
1236
+ {
1237
+ "type": "image",
1238
+ "bbox": [
1239
+ 0.111,
1240
+ 0.081,
1241
+ 0.49,
1242
+ 0.278
1243
+ ],
1244
+ "angle": 0,
1245
+ "content": null
1246
+ },
1247
+ {
1248
+ "type": "image_caption",
1249
+ "bbox": [
1250
+ 0.174,
1251
+ 0.285,
1252
+ 0.428,
1253
+ 0.3
1254
+ ],
1255
+ "angle": 0,
1256
+ "content": "(a) Examples are randomly arranged."
1257
+ },
1258
+ {
1259
+ "type": "image",
1260
+ "bbox": [
1261
+ 0.509,
1262
+ 0.08,
1263
+ 0.885,
1264
+ 0.277
1265
+ ],
1266
+ "angle": 0,
1267
+ "content": null
1268
+ },
1269
+ {
1270
+ "type": "image_caption",
1271
+ "bbox": [
1272
+ 0.538,
1273
+ 0.285,
1274
+ 0.856,
1275
+ 0.3
1276
+ ],
1277
+ "angle": 0,
1278
+ "content": "(b) Examples are arranged alternately by class."
1279
+ },
1280
+ {
1281
+ "type": "image_caption",
1282
+ "bbox": [
1283
+ 0.113,
1284
+ 0.311,
1285
+ 0.884,
1286
+ 0.355
1287
+ ],
1288
+ "angle": 0,
1289
+ "content": "Figure 6: Many-shot in-context learning accuracy on four datasets. The accuracy for both DIFF Transformer and DINT (Ours) models is presented, showing performance improvements across different numbers of demonstration samples."
1290
+ },
1291
+ {
1292
+ "type": "table",
1293
+ "bbox": [
1294
+ 0.126,
1295
+ 0.377,
1296
+ 0.476,
1297
+ 0.476
1298
+ ],
1299
+ "angle": 0,
1300
+ "content": "<table><tr><td>Size</td><td>Hidden Dim.</td><td>#Layers</td><td>#Heads</td></tr><tr><td>830M</td><td>1536</td><td>24</td><td>8</td></tr><tr><td>1.4B</td><td>2048</td><td>24</td><td>8</td></tr><tr><td>2.8B</td><td>2560</td><td>32</td><td>10</td></tr><tr><td>6.8B</td><td>4096</td><td>32</td><td>16</td></tr><tr><td>13.1B</td><td>5120</td><td>40</td><td>20</td></tr></table>"
1301
+ },
1302
+ {
1303
+ "type": "table_caption",
1304
+ "bbox": [
1305
+ 0.113,
1306
+ 0.487,
1307
+ 0.49,
1308
+ 0.56
1309
+ ],
1310
+ "angle": 0,
1311
+ "content": "Table 3: Model configurations for different sizes, including hidden dimension, number of layers, and number of attention heads. Each model was trained with a sequence length of 2048 and a batch size of 0.25 million tokens, for a total of 40K training steps."
1312
+ },
1313
+ {
1314
+ "type": "table",
1315
+ "bbox": [
1316
+ 0.128,
1317
+ 0.575,
1318
+ 0.476,
1319
+ 0.649
1320
+ ],
1321
+ "angle": 0,
1322
+ "content": "<table><tr><td rowspan=\"2\">Model</td><td>N=1</td><td>N=2</td><td>N=4</td><td>N=6</td></tr><tr><td>R=1</td><td>R=2</td><td>R=2</td><td>R=2</td></tr><tr><td>Transformer</td><td>1.00</td><td>0.85</td><td>0.62</td><td>0.55</td></tr><tr><td>DIFF</td><td>1.00</td><td>0.92</td><td>0.84</td><td>0.85</td></tr><tr><td>DINT</td><td>1.00</td><td>0.96</td><td>0.89</td><td>0.88</td></tr></table>"
1323
+ },
1324
+ {
1325
+ "type": "table_caption",
1326
+ "bbox": [
1327
+ 0.113,
1328
+ 0.659,
1329
+ 0.488,
1330
+ 0.716
1331
+ ],
1332
+ "angle": 0,
1333
+ "content": "Table 4: Multi-needle retrieval accuracy in 4K length contexts, averaged over the answer needle positions. \\(N\\) represents the number of needles, and \\(R\\) denotes the number of query cities."
1334
+ },
1335
+ {
1336
+ "type": "text",
1337
+ "bbox": [
1338
+ 0.113,
1339
+ 0.745,
1340
+ 0.491,
1341
+ 0.922
1342
+ ],
1343
+ "angle": 0,
1344
+ "content": "our DINT Transformer architecture. We evaluate the 3B-size language models that support 64K input length. We follow the evaluation protocol of (Bertsch et al., 2024) and use constrained decoding (Ratner et al., 2023). The number of demonstration samples is incrementally increased from 1-shot until the total length reaches 64K tokens. Specifically, we evaluate the models on the following datasets: TREC (Hovy et al., 2001) with 50 classes, Banking-77 (Casanueva et al., 2020) with 77 classes, and Clinic-150 (Larson et al., 2019) with 150 classes."
1345
+ },
1346
+ {
1347
+ "type": "text",
1348
+ "bbox": [
1349
+ 0.508,
1350
+ 0.38,
1351
+ 0.885,
1352
+ 0.492
1353
+ ],
1354
+ "angle": 0,
1355
+ "content": "The results show that DINT Transformer consistently outperforms DIFF Transformer across all datasets and varying numbers of demonstration samples. The improvement in average accuracy is substantial, with DINT achieving \\(2.8\\%\\) higher accuracy on TREC, \\(4.1\\%\\) on TREC-Fine, \\(4.3\\%\\) on Banking-77, and \\(1.8\\%\\) on Clinic-150."
1356
+ },
1357
+ {
1358
+ "type": "text",
1359
+ "bbox": [
1360
+ 0.508,
1361
+ 0.494,
1362
+ 0.885,
1363
+ 0.8
1364
+ ],
1365
+ "angle": 0,
1366
+ "content": "Robustness of In-Context Learning Figure 6 presents a comparison of the robustness between DIFF Transformer and DINT Transformer in the context of in-context learning. By analyzing how performance varies with different order permutations of the same set of demonstration examples, we find that smaller performance fluctuations reflect greater robustness and a reduced risk of catastrophic degradation. The evaluation protocol remains consistent with the previously outlined methodology. Figure 6 displays the results of this analysis on the TREC dataset. We examine two prompt configurations: randomly shuffled examples and examples arranged by class in an alternating pattern. In both configurations, DINT Transformer consistently shows smaller performance fluctuations compared to DIFF Transformer, demonstrating that our approach enhances robustness in in-context learning tasks."
1367
+ },
1368
+ {
1369
+ "type": "title",
1370
+ "bbox": [
1371
+ 0.51,
1372
+ 0.817,
1373
+ 0.688,
1374
+ 0.832
1375
+ ],
1376
+ "angle": 0,
1377
+ "content": "3.5 Ablation Studies"
1378
+ },
1379
+ {
1380
+ "type": "text",
1381
+ "bbox": [
1382
+ 0.508,
1383
+ 0.842,
1384
+ 0.884,
1385
+ 0.922
1386
+ ],
1387
+ "angle": 0,
1388
+ "content": "We perform ablation studies using 1.4B-parameter language models, with the same training setup as the 1.4B model in Section 3.2. Both models have 24 layers, with 16 attention heads for Transformer and 8 for DIFF Transformer, each having a head"
1389
+ },
1390
+ {
1391
+ "type": "page_number",
1392
+ "bbox": [
1393
+ 0.481,
1394
+ 0.928,
1395
+ 0.521,
1396
+ 0.941
1397
+ ],
1398
+ "angle": 0,
1399
+ "content": "9819"
1400
+ }
1401
+ ],
1402
+ [
1403
+ {
1404
+ "type": "table",
1405
+ "bbox": [
1406
+ 0.162,
1407
+ 0.082,
1408
+ 0.835,
1409
+ 0.181
1410
+ ],
1411
+ "angle": 0,
1412
+ "content": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"5\">Attention to Answer↑</td><td colspan=\"5\">Attention Noise↓</td></tr><tr><td>0%</td><td>25%</td><td>50%</td><td>75%</td><td>100%</td><td>0%</td><td>25%</td><td>50%</td><td>75%</td><td>100%</td></tr><tr><td>Transformer</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.07</td><td>0.09</td><td>0.51</td><td>0.54</td><td>0.52</td><td>0.49</td><td>0.49</td></tr><tr><td>DIFF</td><td>0.27</td><td>0.30</td><td>0.31</td><td>0.32</td><td>0.40</td><td>0.01</td><td>0.02</td><td>0.02</td><td>0.02</td><td>0.01</td></tr><tr><td>DINT (Ours)</td><td>0.35</td><td>0.38</td><td>0.40</td><td>0.41</td><td>0.45</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td></tr></table>"
1413
+ },
1414
+ {
1415
+ "type": "table_caption",
1416
+ "bbox": [
1417
+ 0.113,
1418
+ 0.189,
1419
+ 0.884,
1420
+ 0.234
1421
+ ],
1422
+ "angle": 0,
1423
+ "content": "Table 5: Attention scores allocated to answer spans and noise context in the key information retrieval task. The target answer is inserted at varying depths within the context. DINT Transformer allocates more attention to relevant information and effectively minimizes attention noise."
1424
+ },
1425
+ {
1426
+ "type": "table",
1427
+ "bbox": [
1428
+ 0.203,
1429
+ 0.246,
1430
+ 0.797,
1431
+ 0.432
1432
+ ],
1433
+ "angle": 0,
1434
+ "content": "<table><tr><td rowspan=\"2\">Model</td><td rowspan=\"2\">#Heads</td><td rowspan=\"2\">d</td><td rowspan=\"2\">GN</td><td rowspan=\"2\">Valid. Set↓</td><td colspan=\"2\">Fine-Grained Slices</td></tr><tr><td>AR-Hit↓</td><td>Others↓</td></tr><tr><td>DIFF</td><td>8</td><td>128</td><td>✓</td><td>3.062</td><td>0.880</td><td>3.247</td></tr><tr><td>-GroupNorm</td><td>8</td><td>128</td><td>✗</td><td>3.122</td><td>0.911</td><td>3.309</td></tr><tr><td>with λinit = 0.8</td><td>8</td><td>128</td><td>✓</td><td>3.065</td><td>0.883</td><td>3.250</td></tr><tr><td>with λinit = 0.5</td><td>8</td><td>128</td><td>✓</td><td>3.066</td><td>0.882</td><td>3.251</td></tr><tr><td>DINT (Ours)</td><td>8</td><td>128</td><td>✓</td><td>3.055</td><td>0.875</td><td>3.243</td></tr><tr><td>-GroupNorm</td><td>8</td><td>128</td><td>✗</td><td>3.075</td><td>0.893</td><td>3.256</td></tr><tr><td>with λinit = 0.8</td><td>8</td><td>128</td><td>✓</td><td>3.056</td><td>0.877</td><td>3.245</td></tr><tr><td>with λinit = 0.5</td><td>8</td><td>128</td><td>✓</td><td>3.058</td><td>0.878</td><td>3.245</td></tr></table>"
1435
+ },
1436
+ {
1437
+ "type": "table_caption",
1438
+ "bbox": [
1439
+ 0.336,
1440
+ 0.441,
1441
+ 0.66,
1442
+ 0.456
1443
+ ],
1444
+ "angle": 0,
1445
+ "content": "Table 6: Ablation Studies of 1.4B-Size Models."
1446
+ },
1447
+ {
1448
+ "type": "text",
1449
+ "bbox": [
1450
+ 0.114,
1451
+ 0.482,
1452
+ 0.254,
1453
+ 0.496
1454
+ ],
1455
+ "angle": 0,
1456
+ "content": "dimension of 128."
1457
+ },
1458
+ {
1459
+ "type": "text",
1460
+ "bbox": [
1461
+ 0.113,
1462
+ 0.509,
1463
+ 0.49,
1464
+ 0.605
1465
+ ],
1466
+ "angle": 0,
1467
+ "content": "Table 6 reports the fine-grained loss on the validation set, breaking it into two components: \"AR-Hit\" and \"Others.\" \"AR-Hit\" evaluates the model's ability to recall previously seen n-grams, while \"Others\" represents tokens that are either frequent or not recalled from the context."
1468
+ },
1469
+ {
1470
+ "type": "text",
1471
+ "bbox": [
1472
+ 0.113,
1473
+ 0.616,
1474
+ 0.49,
1475
+ 0.921
1476
+ ],
1477
+ "angle": 0,
1478
+ "content": "As shown in Table 6, we performed ablation studies on various design choices in DINT Transformer and compared them with Transformer variants. All models are of similar size and training FLOPs for a fair comparison. The results indicate that our method outperforms DIFF Transformer in both overall loss and fine-grained loss. When GroupNorm is removed, the performance of DIFF Transformer is significantly affected, while DINT Transformer shows a smaller impact. This is because we ensure the row normalization of the attention matrix, which improves the model's overall robustness. Additionally, when using constant initialization for lambda, we observe a slight decrease in performance, but the model still maintains a high level of performance. This demonstrates the effectiveness of our initialization method and shows that the model is robust to different initialization choices."
1479
+ },
1480
+ {
1481
+ "type": "title",
1482
+ "bbox": [
1483
+ 0.51,
1484
+ 0.481,
1485
+ 0.651,
1486
+ 0.496
1487
+ ],
1488
+ "angle": 0,
1489
+ "content": "4 Conclusions"
1490
+ },
1491
+ {
1492
+ "type": "text",
1493
+ "bbox": [
1494
+ 0.508,
1495
+ 0.518,
1496
+ 0.886,
1497
+ 0.664
1498
+ ],
1499
+ "angle": 0,
1500
+ "content": "We propose DINT Transformer, which integrates global attention statistics into DIFF Transformer to reduce noise and enhance focus on key words. This improves the model's ability to capture global information, ensuring better stability and scalability. Experiments show DINT Transformer excels in long-sequence modeling, key information retrieval, and in-context learning, making it highly promising for NLP tasks requiring global context awareness."
1501
+ },
1502
+ {
1503
+ "type": "title",
1504
+ "bbox": [
1505
+ 0.51,
1506
+ 0.691,
1507
+ 0.646,
1508
+ 0.707
1509
+ ],
1510
+ "angle": 0,
1511
+ "content": "5 Limitations"
1512
+ },
1513
+ {
1514
+ "type": "text",
1515
+ "bbox": [
1516
+ 0.508,
1517
+ 0.729,
1518
+ 0.885,
1519
+ 0.905
1520
+ ],
1521
+ "angle": 0,
1522
+ "content": "While the integration mechanism in DINT Transformer has significantly improved model performance, this design inevitably introduces additional computational complexity. These computational characteristics present new optimization opportunities for large-scale model deployment, particularly when processing long-sequence inputs. Through our algorithm-system co-design approach, we are actively developing more efficient implementations to further enhance the computational efficiency of DINT Transformer."
1523
+ },
1524
+ {
1525
+ "type": "page_number",
1526
+ "bbox": [
1527
+ 0.481,
1528
+ 0.928,
1529
+ 0.522,
1530
+ 0.941
1531
+ ],
1532
+ "angle": 0,
1533
+ "content": "9820"
1534
+ }
1535
+ ],
1536
+ [
1537
+ {
1538
+ "type": "title",
1539
+ "bbox": [
1540
+ 0.116,
1541
+ 0.085,
1542
+ 0.214,
1543
+ 0.099
1544
+ ],
1545
+ "angle": 0,
1546
+ "content": "References"
1547
+ },
1548
+ {
1549
+ "type": "ref_text",
1550
+ "bbox": [
1551
+ 0.117,
1552
+ 0.108,
1553
+ 0.49,
1554
+ 0.161
1555
+ ],
1556
+ "angle": 0,
1557
+ "content": "Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R. Gormley, and Graham Neubig. 2024. In-context learning with long-context models: An in-depth exploration. arXiv preprint, arXiv:2405.00200."
1558
+ },
1559
+ {
1560
+ "type": "ref_text",
1561
+ "bbox": [
1562
+ 0.117,
1563
+ 0.171,
1564
+ 0.489,
1565
+ 0.238
1566
+ ],
1567
+ "angle": 0,
1568
+ "content": "Inigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. 2020. Efficient intent detection with dual sentence encoders. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38-45."
1569
+ },
1570
+ {
1571
+ "type": "ref_text",
1572
+ "bbox": [
1573
+ 0.117,
1574
+ 0.247,
1575
+ 0.489,
1576
+ 0.313
1577
+ ],
1578
+ "angle": 0,
1579
+ "content": "L Gao, J Tow, B Abbasi, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, A Le Noac'h, and 1 others. 2023. A framework for few-shot language model evaluation, 12 2023. URL https://zenodo.org/records/10256836, 7."
1580
+ },
1581
+ {
1582
+ "type": "ref_text",
1583
+ "bbox": [
1584
+ 0.117,
1585
+ 0.323,
1586
+ 0.489,
1587
+ 0.363
1588
+ ],
1589
+ "angle": 0,
1590
+ "content": "Xinyang Geng and Hao Liu. 2023. Openllama: An open reproduction of llama. URL: https://github.com/openlm-research/open_llama."
1591
+ },
1592
+ {
1593
+ "type": "ref_text",
1594
+ "bbox": [
1595
+ 0.117,
1596
+ 0.373,
1597
+ 0.489,
1598
+ 0.44
1599
+ ],
1600
+ "angle": 0,
1601
+ "content": "Eduard Hovy, Laurie Gerber, Ulf Hermjakob, ChinYew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In Proceedings of the first international conference on Human language technology research."
1602
+ },
1603
+ {
1604
+ "type": "ref_text",
1605
+ "bbox": [
1606
+ 0.117,
1607
+ 0.45,
1608
+ 0.489,
1609
+ 0.489
1610
+ ],
1611
+ "angle": 0,
1612
+ "content": "Greg Kamradt. 2023. Needle in a haystack - pressure testing llms. https://github.com/gkamradt/LLMTest_NeedleInAHaystack/tree/main."
1613
+ },
1614
+ {
1615
+ "type": "ref_text",
1616
+ "bbox": [
1617
+ 0.117,
1618
+ 0.5,
1619
+ 0.49,
1620
+ 0.63
1621
+ ],
1622
+ "angle": 0,
1623
+ "content": "Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, and et al. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316."
1624
+ },
1625
+ {
1626
+ "type": "ref_text",
1627
+ "bbox": [
1628
+ 0.117,
1629
+ 0.641,
1630
+ 0.489,
1631
+ 0.693
1632
+ ],
1633
+ "angle": 0,
1634
+ "content": "Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel. 2024. World model on million-length video and language with ringattention. arXiv preprint, arXiv:2402.08268."
1635
+ },
1636
+ {
1637
+ "type": "ref_text",
1638
+ "bbox": [
1639
+ 0.117,
1640
+ 0.704,
1641
+ 0.489,
1642
+ 0.769
1643
+ ],
1644
+ "angle": 0,
1645
+ "content": "Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2021. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786."
1646
+ },
1647
+ {
1648
+ "type": "ref_text",
1649
+ "bbox": [
1650
+ 0.117,
1651
+ 0.78,
1652
+ 0.489,
1653
+ 0.833
1654
+ ],
1655
+ "angle": 0,
1656
+ "content": "Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. 2022. The devil in linear transformer. arXiv preprint arXiv:2210.10340."
1657
+ },
1658
+ {
1659
+ "type": "ref_text",
1660
+ "bbox": [
1661
+ 0.117,
1662
+ 0.844,
1663
+ 0.489,
1664
+ 0.884
1665
+ ],
1666
+ "angle": 0,
1667
+ "content": "Prajit Ramachandran, Barret Zoph, and Quoc V Le. 2017. Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941, 7(1):5."
1668
+ },
1669
+ {
1670
+ "type": "ref_text",
1671
+ "bbox": [
1672
+ 0.117,
1673
+ 0.894,
1674
+ 0.489,
1675
+ 0.921
1676
+ ],
1677
+ "angle": 0,
1678
+ "content": "Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon"
1679
+ },
1680
+ {
1681
+ "type": "list",
1682
+ "bbox": [
1683
+ 0.117,
1684
+ 0.108,
1685
+ 0.49,
1686
+ 0.921
1687
+ ],
1688
+ "angle": 0,
1689
+ "content": null
1690
+ },
1691
+ {
1692
+ "type": "ref_text",
1693
+ "bbox": [
1694
+ 0.529,
1695
+ 0.086,
1696
+ 0.884,
1697
+ 0.152
1698
+ ],
1699
+ "angle": 0,
1700
+ "content": "Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. Parallel context windows for large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, pages 6383-6402."
1701
+ },
1702
+ {
1703
+ "type": "ref_text",
1704
+ "bbox": [
1705
+ 0.512,
1706
+ 0.162,
1707
+ 0.884,
1708
+ 0.253
1709
+ ],
1710
+ "angle": 0,
1711
+ "content": "Machel Reid, Nikolay Savinov, Denis Teptyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, and et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint, arXiv:2403.05530."
1712
+ },
1713
+ {
1714
+ "type": "ref_text",
1715
+ "bbox": [
1716
+ 0.512,
1717
+ 0.264,
1718
+ 0.883,
1719
+ 0.291
1720
+ ],
1721
+ "angle": 0,
1722
+ "content": "Noam Shazeer. 2020. Glu variants improve transformer. arXiv preprint arXiv:2002.05202."
1723
+ },
1724
+ {
1725
+ "type": "ref_text",
1726
+ "bbox": [
1727
+ 0.512,
1728
+ 0.3,
1729
+ 0.883,
1730
+ 0.379
1731
+ ],
1732
+ "angle": 0,
1733
+ "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, and 1 others. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971."
1734
+ },
1735
+ {
1736
+ "type": "ref_text",
1737
+ "bbox": [
1738
+ 0.512,
1739
+ 0.389,
1740
+ 0.883,
1741
+ 0.429
1742
+ ],
1743
+ "angle": 0,
1744
+ "content": "Jonathan Tow. 2023. Stablelm alpha v2 models. https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2."
1745
+ },
1746
+ {
1747
+ "type": "ref_text",
1748
+ "bbox": [
1749
+ 0.512,
1750
+ 0.439,
1751
+ 0.883,
1752
+ 0.478
1753
+ ],
1754
+ "angle": 0,
1755
+ "content": "Jonathan Tow, Marco Bellagente, Dakota Mahan, and Carlos Riquelme. 2023. Stablelm 3b 4e1t. https://aka.ms/StableLM-3B-4E1T."
1756
+ },
1757
+ {
1758
+ "type": "ref_text",
1759
+ "bbox": [
1760
+ 0.512,
1761
+ 0.488,
1762
+ 0.883,
1763
+ 0.516
1764
+ ],
1765
+ "angle": 0,
1766
+ "content": "A Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems."
1767
+ },
1768
+ {
1769
+ "type": "ref_text",
1770
+ "bbox": [
1771
+ 0.512,
1772
+ 0.525,
1773
+ 0.883,
1774
+ 0.604
1775
+ ],
1776
+ "angle": 0,
1777
+ "content": "Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, and 1 others. 2023. Magneto: a foundation transformer. In International Conference on Machine Learning, pages 36077-36092. PMLR."
1778
+ },
1779
+ {
1780
+ "type": "ref_text",
1781
+ "bbox": [
1782
+ 0.512,
1783
+ 0.614,
1784
+ 0.883,
1785
+ 0.655
1786
+ ],
1787
+ "angle": 0,
1788
+ "content": "Yuxin Wu and Kaiming He. 2018. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3-19."
1789
+ },
1790
+ {
1791
+ "type": "ref_text",
1792
+ "bbox": [
1793
+ 0.512,
1794
+ 0.663,
1795
+ 0.883,
1796
+ 0.704
1797
+ ],
1798
+ "angle": 0,
1799
+ "content": "Tianzhu Ye, Li Dong, Yuqing Xia, Yutao Sun, Yi Zhu, Gao Huang, and Furu Wei. 2024. Differential transformer. arXiv preprint arXiv:2410.05258."
1800
+ },
1801
+ {
1802
+ "type": "ref_text",
1803
+ "bbox": [
1804
+ 0.512,
1805
+ 0.713,
1806
+ 0.883,
1807
+ 0.753
1808
+ ],
1809
+ "angle": 0,
1810
+ "content": "Biao Zhang and Rico Sennrich. 2019. Root mean square layer normalization. Advances in Neural Information Processing Systems, 32."
1811
+ },
1812
+ {
1813
+ "type": "list",
1814
+ "bbox": [
1815
+ 0.512,
1816
+ 0.086,
1817
+ 0.884,
1818
+ 0.753
1819
+ ],
1820
+ "angle": 0,
1821
+ "content": null
1822
+ },
1823
+ {
1824
+ "type": "page_number",
1825
+ "bbox": [
1826
+ 0.481,
1827
+ 0.929,
1828
+ 0.519,
1829
+ 0.941
1830
+ ],
1831
+ "angle": 0,
1832
+ "content": "9821"
1833
+ }
1834
+ ]
1835
+ ]
2025/DINT Transformer/0d93eff8-c6dd-408c-9936-774d22532538_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:703e9eedb9129d4e6593122ca5163202a87435ef5983b7dd29cec28f382e8364
3
+ size 2349691
2025/DINT Transformer/full.md ADDED
@@ -0,0 +1,289 @@
1
+ # DINT Transformer
2
+
3
+ Yueyang Cang*
4
+
5
+ Yuhang Liu*
6
+
7
+ Xiaoteng Zhang
8
+
9
+ Tsinghua University / Beijing
10
+
11
+ {cangyy23,yh-liu23,zxt21}@mails.tsinghua.edu.cn
12
+
13
+ Erlu Zhao
14
+
15
+ Peking University Health Science Center / Beijing; Tsinghua University / Beijing
16
+
17
+ 2411210080@bjmu.edu.cn
18
+
19
+ zzushi@126.com
20
+
21
+ # Abstract
22
+
23
+ The DIFF Transformer mitigates interference from irrelevant contexts by introducing a differential attention mechanism, thereby enhancing focus on critical tokens. However, this architecture suffers from two major limitations: first, its use of two independent attention matrices leads to numerical instability, and second, it lacks global context modeling, which is essential for identifying globally significant tokens. To address these challenges, we propose the DINT Transformer, which extends the DIFF Transformer by incorporating an integral mechanism. By computing global importance scores and integrating them into the attention matrix, the DINT Transformer not only improves overall numerical stability but also significantly enhances its ability to capture global dependencies. Experimental results demonstrate that the DINT Transformer achieves superior accuracy and robustness across various practical applications, including long-context language modeling and key information retrieval. These advancements establish the DINT Transformer as a highly effective and promising architecture.
24
+
25
+ # 1 Introduction
26
+
27
+ The Transformer (Vaswani, 2017) is one of the most popular models in artificial intelligence today, widely used in natural language processing, computer vision, and other fields, especially through decoder-only architectures in large language models (LLMs). Its core is the softmax-based attention mechanism, which assigns importance to different tokens in a sequence. However, recent research (Lu et al., 2021) has found that LLMs face the challenge of attention noise when trying to focus accurately on key information in the context.
28
+
29
+ To address the issue of attention noise, DIFF Transformer (Ye et al., 2024) introduces a differential attention mechanism that effectively suppresses the impact of irrelevant context by computing the difference between two independent attention distributions. However, DIFF Transformer still exhibits significant limitations: the use of two independent attention matrices makes it difficult to accurately estimate weights for noisy components, resulting in numerical instability that may adversely affect downstream task performance.
32
+
33
+ Through our research, we observed that the semantic interpretation of most tokens in a sequence often depends on a few globally critical tokens. Taking sentence processing as an example, key elements such as subjects or main predicate verbs frequently serve as semantic anchors (as illustrated in Figure 1), playing a decisive role in constructing overall meaning. Building on this insight, we developed DINT Transformer by introducing an integral mechanism to extend DIFF Transformer. This integral component computes global importance scores, enabling the model to enhance its focus on critical tokens. Our proposed DINT Transformer not only further reduces attention noise by strengthening the focus on globally important tokens, but also significantly decreases the frequency of negative values in attention matrices through parametric design, thereby improving the model's overall numerical stability and substantially boosting performance.
34
+
35
+ Through comprehensive experiments on long-context language modeling and key information retrieval tasks, we rigorously validated the efficacy of DINT Transformer. The results demonstrate that DINT Transformer consistently outperforms both conventional Transformer and DIFF Transformer across all tasks. Its integral mechanism not only effectively captures global dependencies and further suppresses attention noise, but also significantly enhances model stability, successfully addressing inherent limitations of existing approaches. Moreover, while maintaining excellent scalability, DINT Transformer delivers substantial performance improvements in downstream tasks such as key information retrieval.
36
+
37
+ ![](images/c5832b5874aae07cfe0d6fc9ccf2edcaf9db53c18224229424ae921d100551d7.jpg)
38
+ Einstein was born in Germany and was a physicist, mathematician, and philosopher.
39
+ Figure 1: The DIFF Transformer's use of two independent attention matrices results in a significant proportion of negative values in its final attention scores. In contrast, the DINT Transformer substantially reduces the occurrence of negative values, enhances numerical stability, and more effectively strengthens attention to globally important tokens, such as the key entity "Einstein" in the example sentence.
40
+
41
+ These significant findings establish DINT Transformer as a robust foundational architecture for future advancements in sequence modeling and large language models.
42
+
43
+ # 2 DINT Transformer
44
+
45
+ DINT Transformer is designed as a robust architecture for sequence modeling, particularly for large language models (LLMs). The model consists of $L$ stacked layers, where each layer applies a DINT attention module followed by a feedforward network. Starting from token embeddings $X_0 \in \mathbb{R}^{N \times d_{\mathrm{model}}}$, the input is progressively transformed through $L$ layers to produce the final output $X_L$. The key innovation lies in the addition of an integral mechanism within the attention module, which enables effective modeling of global dependencies while preserving numerical stability. The overall structure aligns with common practices, incorporating pre-RMSNorm (Zhang and Sennrich, 2019) and SwiGLU (Ramachandran et al., 2017; Shazeer, 2020) for enhanced performance, following LLaMA (Touvron et al., 2023). A diagram of the model architecture is shown in Figure 2.
46
+
47
+ # 2.1 DIFF Attention
48
+
49
+ DIFF attention introduces a differential attention mechanism that reduces attention noise by leveraging the difference between two attention distributions. Specifically, given the input $X \in \mathbb{R}^{N \times d_{\mathrm{model}}}$ ,
50
+
51
+ it is projected to query, key, and value matrices:
52
+
53
+ $$
54
+ [Q_1; Q_2] = X W_Q, \quad [K_1; K_2] = X W_K, \quad V = X W_V, \tag{1}
55
+ $$
56
+
57
+ where $Q_{1}, Q_{2}, K_{1}, K_{2} \in \mathbb{R}^{N \times d}$ and $V \in \mathbb{R}^{N \times 2d}$ are the projected matrices, and $W_{Q}, W_{K}, W_{V} \in \mathbb{R}^{d_{\mathrm{model}} \times 2d}$ are learnable parameters. The differential attention operator computes the output as:
58
+
59
+ $$
60
+ \operatorname{DiffAttn}(X) = \left(\operatorname{softmax}\left(\frac{Q_1 K_1^{\top}}{\sqrt{d}}\right) - \lambda \cdot \operatorname{softmax}\left(\frac{Q_2 K_2^{\top}}{\sqrt{d}}\right)\right) V \tag{2}
61
+ $$
62
+
63
+ where $\lambda$ is a learnable scalar parameter. This differential mechanism effectively suppresses irrelevant context, enhancing the robustness of the attention scores by canceling common-mode noise, analogous to the operation of differential amplifiers in electrical engineering. To synchronize learning dynamics, $\lambda$ is re-parameterized as:
64
+
65
+ $$
66
+ \lambda = \exp\left(\lambda_{q1} \cdot \lambda_{k1}\right) - \exp\left(\lambda_{q2} \cdot \lambda_{k2}\right) + \lambda_{\text{init}}, \tag{3}
67
+ $$
68
+
69
+ where $\lambda_{q1},\lambda_{k1},\lambda_{q2},\lambda_{k2}\in \mathbb{R}^d$ are learnable vectors, and $\lambda_{\mathrm{init}}\in (0,1)$ is a constant used for initialization. Empirical results show that setting $\lambda_{\mathrm{init}} = 0.8 - 0.6\times \exp (-0.3\cdot (l - 1))$ , where $l\in [1,L]$ represents the layer index, works well in practice.
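+
+ As a concrete illustration, here is a minimal, single-head PyTorch sketch of Eqs. (1)-(3). It is not the authors' implementation: it treats $\lambda$ as a plain scalar rather than the vector re-parameterization above, and it omits causal masking and batching for brevity; the function names are ours.
+
+ ```python
+ import math
+ import torch
+ import torch.nn.functional as F
+
+ def lambda_init(layer_idx: int) -> float:
+     # Layer-dependent initialization from the paper: 0.8 - 0.6 * exp(-0.3 * (l - 1))
+     return 0.8 - 0.6 * math.exp(-0.3 * (layer_idx - 1))
+
+ def diff_attn(X, W_Q, W_K, W_V, lam):
+     # X: (N, d_model); W_Q, W_K, W_V: (d_model, 2d); lam: scalar lambda
+     d = W_Q.shape[1] // 2
+     Q1, Q2 = (X @ W_Q).split(d, dim=-1)   # two query maps, each (N, d)
+     K1, K2 = (X @ W_K).split(d, dim=-1)   # two key maps, each (N, d)
+     V = X @ W_V                           # (N, 2d)
+     A1 = F.softmax(Q1 @ K1.T / math.sqrt(d), dim=-1)
+     A2 = F.softmax(Q2 @ K2.T / math.sqrt(d), dim=-1)
+     return (A1 - lam * A2) @ V            # Eq. (2): difference of the two maps
+ ```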
70
+
71
+ # 2.2 DINT Attention
72
+
73
+ DINT attention extends DIFF attention by introducing an integral mechanism, enhancing the model's ability to capture globally important information while maintaining numerical stability through row normalization in the final attention matrix.
74
+
75
+ ![](images/31ad5755d7c15747522595c809d40b7a5a348b94dbbb6239259f152dd0b25d9d.jpg)
76
+ Figure 2: Multi-head DINT Attention. The DIFF attention matrix reduces attention noise, while the integration attention matrix enhances global attention.
77
+
78
+ ![](images/7fe408db740e492ca4c65c75575a10fb090b5c9e260f8c7dede65700cf7ff21c.jpg)
79
+
80
+ ![](images/480685ce65be66fd92181b7ad59764bbfb94fd2ac8eaca3c5519cf0f37bc2c52.jpg)
81
+ (a) Scaling model size ranging from 830M to 13B.
82
+
83
+ ![](images/9a08374fb58333f784fbb45c631712cfff30c532a9f971665336402eb27bc4c6.jpg)
84
+ (b) Scaling number of training tokens for 3B models.
85
+ Figure 3: Language modeling loss of scaling up parameter count and training tokens. DINT Transformer outperforms other models, demonstrating that it requires fewer parameters or tokens to achieve comparable performance. (a) DINT Transformer matches the performance of larger models with fewer parameters. (b) DINT Transformer achieves comparable performance using significantly fewer training tokens.
86
+
87
+ The signal attention matrix $A_{1}$ is computed using $Q_{1}$ and $K_{1}$:
88
+
89
+ $$
90
+ A_1 = \operatorname{softmax}\left(\frac{Q_1 K_1^{\top}}{\sqrt{d}}\right). \tag{4}
91
+ $$
92
+
93
+ The integral component computes global importance scores by column-wise averaging of the signal attention weights. Crucially, to prevent information leakage, the averaging operation only considers tokens preceding the current token in the sequence.
96
+
97
+ $$
98
+ G[n, :] = \frac{1}{n} \sum_{m=1}^{n} A_1[m, :], \tag{5}
99
+ $$
100
+
101
+ where $G \in \mathbb{R}^{N \times N}$ is the matrix of global importance scores.
102
+
103
+ The DINT attention operator then computes the output as:
104
+
105
+ $$
106
+ \operatorname{DINTAttn}(X) = \left(A_{\text{diff}} + \gamma \cdot \operatorname{softmax}(G)\right) V, \tag{6}
107
+ $$
108
+
109
+ where $\gamma$ is a learnable scalar following DIFF Transformer, and $A_{\mathrm{diff}}$ is the DIFF attention component.
110
+
111
+ Unified Parameter Setting. By setting $\lambda$ and $\gamma$ to the same value, we ensure that the rows of the final attention matrix $A_{\mathrm{final}}$ sum to 1. This row normalization guarantees numerical stability and consistency across the model, thus maintaining data stability throughout the layers. This unified setting follows the parameterization method used in DIFF Transformer, further enhancing training stability.
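+
+ To make Eqs. (4)-(6) and the row-sum property concrete, here is a minimal sketch under the same assumptions as the previous snippet (single head, no causal masking of $\operatorname{softmax}(G)$), with $\gamma$ tied to $\lambda$ per the unified setting; it takes the two softmax maps from the earlier sketch as inputs:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def dint_attn(A1, A2, V, lam):
+     # A1, A2: (N, N) softmax attention maps; V: (N, 2d); lam plays both roles
+     N = A1.shape[0]
+     A_diff = A1 - lam * A2  # each row of the DIFF component sums to 1 - lam
+     # Eq. (5): G[n, :] is the running column-wise mean of A1 over rows 1..n,
+     # so position n only aggregates statistics from itself and preceding rows.
+     counts = torch.arange(1, N + 1, dtype=A1.dtype, device=A1.device)
+     G = A1.cumsum(dim=0) / counts.unsqueeze(1)
+     # Eq. (6): gamma * softmax(G) contributes lam per row, so every row of
+     # the final attention matrix sums to exactly (1 - lam) + lam = 1.
+     return (A_diff + lam * F.softmax(G, dim=-1)) @ V
+ ```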
114
+
115
+ # 2.3 Multi-Head Differential Attention
116
+
117
+ We also use the multi-head mechanism in DINT Transformer. Let $h$ denote the number of attention heads. We use different projection matrices $W_{Q}^{i}, W_{K}^{i}, W_{V}^{i}, i \in [1, h]$ for the heads. The scalar $\lambda$ is shared between heads within the same layer. Then the head outputs are normalized and projected to the final results as follows:
118
+
119
+ $$
120
+ \operatorname{head}_i = \operatorname{DiffAttn}\left(X; W_Q^{i}, W_K^{i}, W_V^{i}, \lambda\right) \tag{7}
121
+ $$
122
+
123
+ $$
124
+ \overline{\operatorname{head}_i} = \operatorname{LN}\left(\operatorname{head}_i\right) \tag{8}
125
+ $$
126
+
127
+ $$
+ \operatorname{MultiHead}(X) = \operatorname{Concat}\left(\overline{\operatorname{head}}_1, \dots, \overline{\operatorname{head}}_h\right) W_O \tag{9}
+ $$
128
+
129
+ where $W_O \in \mathbb{R}^{d_{\mathrm{model}} \times d_{\mathrm{model}}}$ is a learnable projection matrix, $\mathrm{LN}(\cdot)$ uses RMSNorm for each head, and $\mathrm{Concat}(\cdot)$ concatenates the heads together along the channel dimension. Unlike DIFF Transformer, we do not apply an additional multiplier to the outputs of each head, as the unified parameter setting in DINT Transformer already ensures numerical stability and consistency. The number of heads is set as $h = d_{\mathrm{model}} / 2d$ , where $d$ is the head dimension of the Transformer, to ensure that the parameter count and computational complexity are aligned.
130
+
131
+ Headwise Normalization. Figure 2 illustrates the use of GroupNorm (Wu and He, 2018) within the attention mechanism to stabilize training. Although Layer Normalization (LN) is applied independently to each attention head, the sparse nature of differential attention often leads to varied statistical patterns across heads. By normalizing each head individually before the concatenation step, LN ensures more consistent gradient statistics, which contributes to improved training stability (Qin et al., 2022; Wang et al., 2023).
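+
+ A sketch of the head combination in Eqs. (7)-(9), using a bare RMSNorm (without a learned gain) as the per-head $\operatorname{LN}(\cdot)$; the per-head attention functions are passed in as callables, and the function names are ours:
+
+ ```python
+ import torch
+
+ def rms_norm(x, eps=1e-6):
+     # RMSNorm without a learned gain, applied over the channel dimension
+     return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
+
+ def multi_head(X, heads, W_O):
+     # heads: list of h callables, each mapping (N, d_model) -> (N, 2d);
+     # with h = d_model / 2d, the concatenation is again (N, d_model).
+     outs = [rms_norm(head(X)) for head in heads]  # Eqs. (7)-(8)
+     return torch.cat(outs, dim=-1) @ W_O          # Eq. (9)
+ ```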
132
+
133
+ # 2.4 Overall Architecture
134
+
135
+ The overall architecture stacks $L$ layers, where each layer contains a multihead differential attention module and a feedforward network module. We describe DINT Transformer layer as:
136
+
137
+ $$
138
+ Y^{l} = \operatorname{MultiHead}\left(\mathrm{LN}\left(X^{l}\right)\right) + X^{l} \tag{10}
139
+ $$
140
+
141
+ $$
142
+ X^{l+1} = \operatorname{SwiGLU}\left(\mathrm{LN}\left(Y^{l}\right)\right) + Y^{l} \tag{11}
143
+ $$
144
+
145
+ where $\mathrm{LN}(\cdot)$ is RMSNorm, and $\operatorname{SwiGLU}(X)$ is defined as:
146
+
147
+ $$
148
+ \operatorname{SwiGLU}(X) = \left(\operatorname{swish}\left(X W_G\right) \odot X W_1\right) W_2,
149
+ $$
150
+
151
+ where $W_{G}, W_{1} \in \mathbb{R}^{d_{\mathrm{model}} \times \frac{8}{3} d_{\mathrm{model}}}$, and $W_{2} \in \mathbb{R}^{\frac{8}{3} d_{\mathrm{model}} \times d_{\mathrm{model}}}$ are learnable matrices.
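+
+ Putting Eqs. (10)-(11) and the SwiGLU definition together, a DINT Transformer layer is two pre-normalized residual blocks. A minimal sketch, reusing rms_norm from the sketch above; F.silu implements swish with beta = 1:
+
+ ```python
+ import torch.nn.functional as F
+
+ def swiglu(X, W_G, W_1, W_2):
+     # SwiGLU(X) = (swish(X W_G) elementwise-times X W_1) W_2
+     return (F.silu(X @ W_G) * (X @ W_1)) @ W_2
+
+ def dint_layer(X, attn, ffn):
+     # Eq. (10): attention block with pre-RMSNorm and a residual connection
+     Y = attn(rms_norm(X)) + X
+     # Eq. (11): feed-forward (SwiGLU) block with pre-RMSNorm and a residual
+     return ffn(rms_norm(Y)) + Y
+ ```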
152
+
153
+ # 3 Experiments
154
+
155
+ In this study, we evaluate DINT Transformer through a series of experiments, comparing it with DIFF Transformer and other baseline models. Since DINT Transformer introduces no new learnable parameters and only increases computational complexity, its parameter count remains unchanged; the model configurations used in the comparison are therefore the same as those of DIFF Transformer. Our experiments show that by enhancing attention to globally significant tokens, DINT Transformer effectively reduces attention noise. Additionally, DINT Transformer exhibits stronger stability than DIFF Transformer, leading to improved performance across tasks such as long-sequence modeling, key information retrieval, and in-context learning.
156
+
157
+ # 3.1 Language Modeling Evaluation
158
+
159
+ We trained a 3B DINT Transformer language model using the same configuration settings as the 3B DIFF Transformer language model. The model settings are shown in Table 1.
160
+
161
+ <table><tr><td>Params</td><td>Values</td></tr><tr><td>Layers</td><td>28</td></tr><tr><td>Hidden size</td><td>3072</td></tr><tr><td>FFN size</td><td>8192</td></tr><tr><td>Vocab size</td><td>100,288</td></tr><tr><td>Heads</td><td>12</td></tr><tr><td>Adam β</td><td>(0.9, 0.95)</td></tr><tr><td>LR</td><td>3.2 × 10-4</td></tr><tr><td>Batch size</td><td>4M</td></tr><tr><td>Warmup steps</td><td>1000</td></tr><tr><td>Weight decay</td><td>0.1</td></tr><tr><td>Dropout</td><td>0.0</td></tr></table>
162
+
163
+ Table 1: Configuration settings used for the 3B-size DINT Transformer and DIFF Transformer models.
164
+
165
+ Results. Table 2 presents the zero-shot evaluation results on the LM Eval Harness benchmark (Gao et al., 2023). We compare DINT Transformer with other state-of-the-art Transformer-based models, including OpenLLaMA-v2-3B (Geng and Liu, 2023), StableLM-base-alpha-3B-v2 (Tow, 2023), and StableLM-3B-4E1T (Tow et al., 2023). All models were trained on 1T tokens under identical configurations to ensure fair comparison. The results demonstrate that DINT Transformer not only outperforms these baselines across all downstream tasks but also exhibits superior stability.
168
+
169
+ # 3.2 Scalability Compared with Transformer
170
+
171
+ We evaluated the scalability of DINT Transformer compared to the standard Transformer and DIFF Transformer, specifically focusing on language modeling tasks. This evaluation involved scaling both model size and the number of training tokens. We adopted an enhanced Transformer architecture similar to LLaMA, ensuring a fair comparison by using identical experimental setups.
172
+
173
+ Scaling Model Size As shown in Figure 3(a), DINT Transformer consistently outperformed both Transformer and DIFF Transformer across various model sizes (see Table 3 for model configurations). Specifically, DINT Transformer achieved comparable validation loss to the Transformer with $44\%$ fewer parameters and matched the performance of DIFF Transformer with $29\%$ fewer parameters. This demonstrates the superior efficiency and scalability of DINT Transformer in terms of parameter usage.
174
+
175
+ Scaling Training Tokens Figure 3(b) shows the results of scaling the number of training tokens. The fitted curves indicate that DINT Transformer achieved comparable performance to the Transformer with $33\%$ fewer training tokens. Additionally, DINT Transformer outperformed DIFF Transformer with $16\%$ fewer training tokens. These results highlight the significant data efficiency of DINT Transformer, achieving equivalent or superior results with considerably fewer resources.
176
+
177
+ # 3.3 Key Information Retrieval
178
+
179
+ The Needle-In-A-Haystack test (Kamradt, 2023) is used to evaluate the ability of models to extract key information from long contexts. Following the protocol of LWM (Liu et al., 2024) and Gemini 1.5 (Reid et al., 2024), "needles" are short sentences that assign a unique number to a city. The objective is to retrieve these numbers based on a given query.
180
+
181
+ We position the answer needle at different depths within the context (0%, 25%, 50%, 75%, 100%), while other needles are placed randomly. Each combination of depth and context length is evaluated over 50 samples, and the average accuracy is reported.
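+
+ A sketch of the per-depth averaging loop; make_sample and model.generate are hypothetical stand-ins for the benchmark's haystack construction and decoding, not an API the paper specifies:
+
+ ```python
+ def needle_eval(model, make_sample, depths=(0.0, 0.25, 0.5, 0.75, 1.0), n_samples=50):
+     # make_sample(depth) -> (prompt, answer): hypothetical helper that builds a
+     # haystack with the answer needle inserted at the given relative depth.
+     scores = {}
+     for depth in depths:
+         hits = 0
+         for _ in range(n_samples):
+             prompt, answer = make_sample(depth)
+             hits += answer in model.generate(prompt)  # containment as a hit
+         scores[depth] = hits / n_samples  # average accuracy at this depth
+     return scores
+ ```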
184
+
185
+ Retrieve from 4K Context Length We evaluated the multi-needle retrieval task using 4K-length contexts, inserting $N = 1,2,4,6$ needles and retrieving $R = 1,2$ needles. The models used for evaluation were trained with an input length of 4K. As shown in Table 4, DINT Transformer consistently outperforms the other models. Particularly at $N = 6,R = 2$ , DINT achieves an accuracy of 0.88, significantly better than Transformer and DIFF models, indicating its superior ability to retrieve key information amidst distracting contexts.
186
+
187
+ Retrieve from 64K Context Length As shown in Figure 4, the context lengths evaluated range from 8K to 64K, with the configuration set to $N = 8$ , $R = 1$ . We evaluated the 3B-scale model with extended context (as described in Section 3.3). The accuracy is reported across different answer needle depths (y-axis) and context lengths (x-axis). The bottom row shows the average accuracy across all depths. From the figure, it can be observed that DINT Transformer consistently performs well across varying context lengths and needle depths. Notably, at a 40K context length and $25\%$ needle depth, DINT Transformer shows a $52\%$ improvement in accuracy compared to Transformer and a $12\%$ improvement compared to DIFF Transformer.
188
+
189
+ Attention Score Analysis Table 5 presents the attention scores assigned to the answer span and the noise context in the key information retrieval task. These scores reflect the model's ability to focus on relevant information while ignoring irrelevant noise. We compare the normalized attention scores for different depths (i.e., positions) of the target answer within the context. The results show that DINT Transformer allocates significantly higher attention to the correct answer span and exhibits a substantial reduction in attention noise.
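+
+ One simple way to obtain such scores (a sketch of our own, not necessarily the paper's exact measurement): sum the attention mass that the final query position assigns to the answer-span positions versus all remaining positions:
+
+ ```python
+ import torch
+
+ def attention_mass(A, answer_idx):
+     # A: (N, N) attention matrix; answer_idx: token positions of the answer span
+     row = A[-1]  # attention paid by the final (query) token
+     mask = torch.zeros_like(row, dtype=torch.bool)
+     mask[answer_idx] = True
+     return row[mask].sum().item(), row[~mask].sum().item()  # (answer, noise)
+ ```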
190
+
191
+ # 3.4 In-Context Learning
192
+
193
+ We investigate in-context learning from two main angles: the performance on many-shot classification tasks and the model's ability to maintain robustness when utilizing context. In-context learning is an essential trait of language models, reflecting their capability to make effective use of the provided input context.
194
+
195
+ Many-Shot In-Context Learning As presented in Figure 5, we compare the accuracy of many-shot classification between DIFF Transformer and our DINT Transformer architecture.
196
+
197
+ <table><tr><td>Model</td><td>ARC-C</td><td>ARC-E</td><td>BoolQ</td><td>HellaSwag</td><td>OBQA</td><td>PIQA</td><td>WinoGrande</td><td>Avg</td></tr><tr><td>OpenLLaMA-3B-v2</td><td>33.9</td><td>67.6</td><td>65.7</td><td>70.0</td><td>26.6</td><td>76.7</td><td>62.9</td><td>57.5</td></tr><tr><td>StableLM-base-alpha-3B-v2</td><td>32.4</td><td>67.3</td><td>64.6</td><td>68.6</td><td>27.1</td><td>76.0</td><td>63.0</td><td>57.0</td></tr><tr><td>StableLM-3B-4E1T</td><td>-</td><td>66.6</td><td>-</td><td>-</td><td>-</td><td>76.8</td><td>63.2</td><td>-</td></tr><tr><td>DIFF-3B</td><td>36.9 ± 2.1</td><td>72.6 ± 1.7</td><td>69.2 ± 1.8</td><td>71.1 ± 2.4</td><td>29.1 ± 0.8</td><td>76.5 ± 1.0</td><td>69.2 ± 2.0</td><td>60.6</td></tr><tr><td>DINT-3B</td><td>39.2 ± 1.7</td><td>74.3 ± 1.3</td><td>70.7 ± 1.2</td><td>72.6 ± 1.7</td><td>30.3 ± 0.5</td><td>77.3 ± 0.6</td><td>72.0 ± 1.2</td><td>62.3</td></tr></table>
198
+
199
+ Table 2: Eval Harness accuracy compared with well-trained Transformer language models. The results indicate the superior performance of DINT Transformer over other models across a range of tasks.
200
+
201
+ ![](images/65797ce23fe06f57dab39f6766fae2cb20d0451f232d71c8b7501ec43c1ff022.jpg)
202
+ Figure 4: Multi-needle retrieval results in 64K length.
203
+
204
+ ![](images/34def1c886986bfdf2ed6c21043ec26f7630e459c2700ec8815cf001773a4b4b.jpg)
205
+ (a) TREC dataset (6 classes)
206
+
207
+ ![](images/2a3a072d61bca3ef79a89f07338c944c07f1d8b8da998d7b7d494fee68b701b3.jpg)
208
+ (b) TREC-fine dataset (50 classes)
209
+
210
+ ![](images/dfaaa09edd7b40873909753120f4e32211571bd4b7e4be95d12b01c270c9131d.jpg)
211
+ (c) Banking-77 dataset (77 classes)
212
+
213
+ ![](images/740297a760cc81126bbf6e43d425e3cfbdeab7b1782fb8cb0228f921c6e8eaeb.jpg)
214
+ (d) Clinic-150 dataset (150 classes)
215
+ Figure 5: Accuracy of many-shot in-context learning across four datasets, with demonstration examples increasing from 1-shot up to a total of 64K tokens. The dashed lines indicate the average accuracy once the model's performance stabilizes.
216
+
217
+ ![](images/a1ecdebe228fec7fb1e1ba60fa89203a44d762688b66f9150a21f07f26b4b063.jpg)
218
+ (a) Examples are randomly arranged.
219
+
220
+ ![](images/e10a43e12a60e35af6dff3d0a00fdde35b52e9e2d3008842f0f2ef78b1777c75.jpg)
221
+ (b) Examples are arranged alternately by class.
222
+ Figure 6: Many-shot in-context learning accuracy on four datasets. The accuracy for both DIFF Transformer and DINT (Ours) models is presented, showing performance improvements across different numbers of demonstration samples.
223
+
224
+ <table><tr><td>Size</td><td>Hidden Dim.</td><td>#Layers</td><td>#Heads</td></tr><tr><td>830M</td><td>1536</td><td>24</td><td>8</td></tr><tr><td>1.4B</td><td>2048</td><td>24</td><td>8</td></tr><tr><td>2.8B</td><td>2560</td><td>32</td><td>10</td></tr><tr><td>6.8B</td><td>4096</td><td>32</td><td>16</td></tr><tr><td>13.1B</td><td>5120</td><td>40</td><td>20</td></tr></table>
225
+
226
+ Table 3: Model configurations for different sizes, including hidden dimension, number of layers, and number of attention heads. Each model was trained with a sequence length of 2048 and a batch size of 0.25 million tokens, for a total of 40K training steps.
227
+
228
+ <table><tr><td rowspan="2">Model</td><td>N=1</td><td>N=2</td><td>N=4</td><td>N=6</td></tr><tr><td>R=1</td><td>R=2</td><td>R=2</td><td>R=2</td></tr><tr><td>Transformer</td><td>1.00</td><td>0.85</td><td>0.62</td><td>0.55</td></tr><tr><td>DIFF</td><td>1.00</td><td>0.92</td><td>0.84</td><td>0.85</td></tr><tr><td>DINT</td><td>1.00</td><td>0.96</td><td>0.89</td><td>0.88</td></tr></table>
229
+
230
+ Table 4: Multi-needle retrieval accuracy in 4K length contexts, averaged over the answer needle positions. $N$ represents the number of needles, and $R$ denotes the number of query cities.
231
+
232
+ We evaluate the 3B-size language models that support 64K input length. We follow the evaluation protocol of Bertsch et al. (2024) and use constrained decoding (Ratner et al., 2023). The number of demonstration samples is incrementally increased from 1-shot until the total length reaches 64K tokens. Specifically, we evaluate the models on the following datasets: TREC (Hovy et al., 2001) with 6 classes, TREC-fine with 50 classes, Banking-77 (Casanueva et al., 2020) with 77 classes, and Clinic-150 (Larson et al., 2019) with 150 classes.
233
+
234
+ The results show that DINT Transformer consistently outperforms DIFF Transformer across all datasets and varying numbers of demonstration samples. The improvement in average accuracy is substantial, with DINT achieving $2.8\%$ higher accuracy on TREC, $4.1\%$ on TREC-Fine, $4.3\%$ on Banking-77, and $1.8\%$ on Clinic-150.
235
+
236
+ Robustness of In-Context Learning Figure 6 presents a comparison of the robustness between DIFF Transformer and DINT Transformer in the context of in-context learning. By analyzing how performance varies with different order permutations of the same set of demonstration examples, we find that smaller performance fluctuations reflect greater robustness and a reduced risk of catastrophic degradation. The evaluation protocol remains consistent with the previously outlined methodology. Figure 6 displays the results of this analysis on the TREC dataset. We examine two prompt configurations: randomly shuffled examples and examples arranged by class in an alternating pattern. In both configurations, DINT Transformer consistently shows smaller performance fluctuations compared to DIFF Transformer, demonstrating that our approach enhances robustness in in-context learning tasks.
237
+
238
+ # 3.5 Ablation Studies
239
+
240
+ We perform ablation studies using 1.4B-parameter language models, with the same training setup as the 1.4B model in Section 3.2. Both models have 24 layers, with 16 attention heads for Transformer and 8 for DIFF Transformer, each having a head dimension of 128.
241
+
242
+ <table><tr><td rowspan="2">Model</td><td colspan="5">Attention to Answer↑</td><td colspan="5">Attention Noise↓</td></tr><tr><td>0%</td><td>25%</td><td>50%</td><td>75%</td><td>100%</td><td>0%</td><td>25%</td><td>50%</td><td>75%</td><td>100%</td></tr><tr><td>Transformer</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.07</td><td>0.09</td><td>0.51</td><td>0.54</td><td>0.52</td><td>0.49</td><td>0.49</td></tr><tr><td>DIFF</td><td>0.27</td><td>0.30</td><td>0.31</td><td>0.32</td><td>0.40</td><td>0.01</td><td>0.02</td><td>0.02</td><td>0.02</td><td>0.01</td></tr><tr><td>DINT (Ours)</td><td>0.35</td><td>0.38</td><td>0.40</td><td>0.41</td><td>0.45</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td></tr></table>
243
+
244
+ Table 5: Attention scores allocated to answer spans and noise context in the key information retrieval task. The target answer is inserted at varying depths within the context. DINT Transformer allocates more attention to relevant information and effectively minimizes attention noise.
245
+
246
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">#Heads</td><td rowspan="2">d</td><td rowspan="2">GN</td><td rowspan="2">Valid. Set↓</td><td colspan="2">Fine-Grained Slices</td></tr><tr><td>AR-Hit↓</td><td>Others↓</td></tr><tr><td>DIFF</td><td>8</td><td>128</td><td>✓</td><td>3.062</td><td>0.880</td><td>3.247</td></tr><tr><td>-GroupNorm</td><td>8</td><td>128</td><td>✗</td><td>3.122</td><td>0.911</td><td>3.309</td></tr><tr><td>with λinit = 0.8</td><td>8</td><td>128</td><td>✓</td><td>3.065</td><td>0.883</td><td>3.250</td></tr><tr><td>with λinit = 0.5</td><td>8</td><td>128</td><td>✓</td><td>3.066</td><td>0.882</td><td>3.251</td></tr><tr><td>DINT (Ours)</td><td>8</td><td>128</td><td>✓</td><td>3.055</td><td>0.875</td><td>3.243</td></tr><tr><td>-GroupNorm</td><td>8</td><td>128</td><td>✗</td><td>3.075</td><td>0.893</td><td>3.256</td></tr><tr><td>with λinit = 0.8</td><td>8</td><td>128</td><td>✓</td><td>3.056</td><td>0.877</td><td>3.245</td></tr><tr><td>with λinit = 0.5</td><td>8</td><td>128</td><td>✓</td><td>3.058</td><td>0.878</td><td>3.245</td></tr></table>
247
+
248
+ Table 6: Ablation Studies of 1.4B-Size Models.
249
+
252
+ Table 6 reports the fine-grained loss on the validation set, breaking it into two components: "AR-Hit" and "Others." "AR-Hit" evaluates the model's ability to recall previously seen n-grams, while "Others" represents tokens that are either frequent or not recalled from the context.
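+
+ As a rough illustration of how such a slicing could be computed from per-token losses, here is our own reading of the metric (bigram recall only, ignoring the token-frequency criterion mentioned above), not the authors' exact definition:
+
+ ```python
+ def slice_losses(tokens, losses, n=2):
+     # AR-Hit: the token completes an n-gram already seen earlier in the context;
+     # Others: everything else (the paper additionally routes frequent tokens here).
+     seen, ar_hit, others = set(), [], []
+     for i in range(len(tokens)):
+         gram = tuple(tokens[i - n + 1 : i + 1]) if i >= n - 1 else None
+         (ar_hit if gram in seen else others).append(losses[i])
+         if gram is not None:
+             seen.add(gram)
+     mean = lambda xs: sum(xs) / max(len(xs), 1)
+     return mean(ar_hit), mean(others)  # average loss per slice
+ ```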
253
+
254
+ As shown in Table 6, we performed ablation studies on various design choices in DINT Transformer and compared them with Transformer variants. All models have similar sizes and training FLOPs to ensure a fair comparison. The results indicate that our method outperforms DIFF Transformer in both overall loss and fine-grained loss. When GroupNorm is removed, the performance of DIFF Transformer degrades significantly, while DINT Transformer is affected far less: because we ensure row normalization of the attention matrix, the model is more robust overall. Additionally, with constant initialization of $\lambda$, we observe only a slight decrease in performance, and the model still performs at a high level. This demonstrates the effectiveness of our initialization method and shows that the model is robust to different initialization choices.
255
+
256
+ # 4 Conclusions
257
+
258
+ We propose DINT Transformer, which integrates global attention statistics into DIFF Transformer to reduce noise and enhance focus on key words. This improves the model's ability to capture global information, ensuring better stability and scalability. Experiments show DINT Transformer excels in long-sequence modeling, key information retrieval, and in-context learning, making it highly promising for NLP tasks requiring global context awareness.
259
+
260
+ # 5 Limitations
261
+
262
+ While the integration mechanism in DINT Transformer has significantly improved model performance, this design inevitably introduces additional computational complexity. These computational characteristics present new optimization opportunities for large-scale model deployment, particularly when processing long-sequence inputs. Through our algorithm-system co-design approach, we are actively developing more efficient implementations to further enhance the computational efficiency of DINT Transformer.
263
+
264
+ # References
265
+
266
+ Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R. Gormley, and Graham Neubig. 2024. In-context learning with long-context models: An in-depth exploration. arXiv preprint, arXiv:2405.00200.
267
+ Inigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. 2020. Efficient intent detection with dual sentence encoders. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38-45.
268
+ L Gao, J Tow, B Abbasi, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, A Le Noac'h, and 1 others. 2023. A framework for few-shot language model evaluation. December 2023. https://zenodo.org/records/10256836.
269
+ Xinyang Geng and Hao Liu. 2023. Openllama: An open reproduction of llama. URL: https://github.com/openlm-research/open_llama.
270
+ Eduard Hovy, Laurie Gerber, Ulf Hermjakob, ChinYew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In Proceedings of the first international conference on Human language technology research.
271
+ Greg Kamradt. 2023. Needle in a haystack - pressure testing llms. https://github.com/gkamradt/LLMTest_NeedleInAHaystack/tree/main.
272
+ Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, and et al. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316.
273
+ Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel. 2024. World model on million-length video and language with ringattention. arXiv preprint, arXiv:2402.08268.
274
+ Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2021. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786.
275
+ Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. 2022. The devil in linear transformer. arXiv preprint arXiv:2210.10340.
276
+ Prajit Ramachandran, Barret Zoph, and Quoc V Le. 2017. Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941, 7(1):5.
277
+ Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. Parallel context windows for large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, pages 6383-6402.
280
+ Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, and et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint, arXiv:2403.05530.
281
+ Noam Shazeer. 2020. Glu variants improve transformer. arXiv preprint arXiv:2002.05202.
282
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, and 1 others. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
283
+ Jonathan Tow. 2023. Stablelm alpha v2 models. https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2.
284
+ Jonathan Tow, Marco Bellagente, Dakota Mahan, and Carlos Riquelme. 2023. Stablelm 3b 4e1t. https://aka.ms/StableLM-3B-4E1T.
285
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
286
+ Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, and 1 others. 2023. Magneto: a foundation transformer. In International Conference on Machine Learning, pages 36077-36092. PMLR.
287
+ Yuxin Wu and Kaiming He. 2018. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3-19.
288
+ Tianzhu Ye, Li Dong, Yuqing Xia, Yutao Sun, Yi Zhu, Gao Huang, and Furu Wei. 2024. Differential transformer. arXiv preprint arXiv:2410.05258.
289
+ Biao Zhang and Rico Sennrich. 2019. Root mean square layer normalization. Advances in Neural Information Processing Systems, 32.
2025/DINT Transformer/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4da9c7b344782b3000f232a5231acafba068d493e24d173be9cbcc3f44fae1a
3
+ size 653425
2025/DINT Transformer/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/ee33bf7a-a428-47bd-8acb-35c3dc3733a8_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/ee33bf7a-a428-47bd-8acb-35c3dc3733a8_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/ee33bf7a-a428-47bd-8acb-35c3dc3733a8_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0af61dc40886c34d4b9763519c08a2a6333e1ee7af098db0ffece607f8b4cef8
3
+ size 3801647
2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b0c70fa39c615ee8b07e0c65ed3b54a4b0b5e6fbc4eae53e3e8d2624bd46a574
3
+ size 2506328
2025/DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India_ Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/DMDTEval_ An Evaluation and Analysis of LLMs on Disambiguation in Multi-domain Translation/ce78ff04-a550-4881-a64f-5059d60592ac_content_list.json ADDED
The diff for this file is too large to render. See raw diff