Chelsea707 committed
Commit 47e36d0 · verified · 1 Parent(s): d8cbe90

Add Batch 49d3a092-3a7e-41b0-a9cd-04543870373e data

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. .gitattributes +63 -0
  2. 2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/0237f953-23bc-44fd-b727-785d584e994b_content_list.json +1723 -0
  3. 2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/0237f953-23bc-44fd-b727-785d584e994b_model.json +0 -0
  4. 2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/0237f953-23bc-44fd-b727-785d584e994b_origin.pdf +3 -0
  5. 2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/full.md +291 -0
  6. 2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/images.zip +3 -0
  7. 2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/layout.json +0 -0
  8. 2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/651ec223-6a7e-4d1c-bdf9-1d5f8132dcb7_content_list.json +0 -0
  9. 2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/651ec223-6a7e-4d1c-bdf9-1d5f8132dcb7_model.json +0 -0
  10. 2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/651ec223-6a7e-4d1c-bdf9-1d5f8132dcb7_origin.pdf +3 -0
  11. 2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/full.md +577 -0
  12. 2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/images.zip +3 -0
  13. 2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/layout.json +0 -0
  14. 2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/40bacbcf-2bbe-4071-992e-5b12d4460b40_content_list.json +0 -0
  15. 2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/40bacbcf-2bbe-4071-992e-5b12d4460b40_model.json +0 -0
  16. 2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/40bacbcf-2bbe-4071-992e-5b12d4460b40_origin.pdf +3 -0
  17. 2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/full.md +0 -0
  18. 2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/images.zip +3 -0
  19. 2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/layout.json +0 -0
  20. 2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/af6e964d-520b-4b92-a1b8-dbbc50874af3_content_list.json +0 -0
  21. 2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/af6e964d-520b-4b92-a1b8-dbbc50874af3_model.json +0 -0
  22. 2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/af6e964d-520b-4b92-a1b8-dbbc50874af3_origin.pdf +3 -0
  23. 2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/full.md +0 -0
  24. 2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/images.zip +3 -0
  25. 2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/layout.json +0 -0
  26. 2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/eec3886d-db73-41c8-8176-f1d6479fe735_content_list.json +0 -0
  27. 2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/eec3886d-db73-41c8-8176-f1d6479fe735_model.json +0 -0
  28. 2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/eec3886d-db73-41c8-8176-f1d6479fe735_origin.pdf +3 -0
  29. 2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/full.md +479 -0
  30. 2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/images.zip +3 -0
  31. 2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/layout.json +0 -0
  32. 2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/f38081e3-d6cf-4c34-b8dc-864bd3374a1b_content_list.json +0 -0
  33. 2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/f38081e3-d6cf-4c34-b8dc-864bd3374a1b_model.json +0 -0
  34. 2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/f38081e3-d6cf-4c34-b8dc-864bd3374a1b_origin.pdf +3 -0
  35. 2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/full.md +701 -0
  36. 2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/images.zip +3 -0
  37. 2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/layout.json +0 -0
  38. 2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/e95cc24c-c30e-4b66-8854-56fbc828fdb9_content_list.json +0 -0
  39. 2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/e95cc24c-c30e-4b66-8854-56fbc828fdb9_model.json +0 -0
  40. 2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/e95cc24c-c30e-4b66-8854-56fbc828fdb9_origin.pdf +3 -0
  41. 2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/full.md +400 -0
  42. 2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/images.zip +3 -0
  43. 2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/layout.json +0 -0
  44. 2025/Watermark Anything With Localized Messages/92c7c1c6-3f70-47a0-9a49-ed1f73de4119_content_list.json +0 -0
  45. 2025/Watermark Anything With Localized Messages/92c7c1c6-3f70-47a0-9a49-ed1f73de4119_model.json +0 -0
  46. 2025/Watermark Anything With Localized Messages/92c7c1c6-3f70-47a0-9a49-ed1f73de4119_origin.pdf +3 -0
  47. 2025/Watermark Anything With Localized Messages/full.md +0 -0
  48. 2025/Watermark Anything With Localized Messages/images.zip +3 -0
  49. 2025/Watermark Anything With Localized Messages/layout.json +0 -0
  50. 2025/WavTokenizer_ an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling/1a814df9-7a3f-46c8-af0e-4fdc8338bc15_content_list.json +0 -0
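
Each paper directory in this batch ships the same artifact set: a `*_content_list.json` of parsed layout blocks, a `*_model.json`, the original PDF (`*_origin.pdf`, LFS-tracked), `full.md`, `images.zip` (LFS-tracked), and `layout.json`. As a minimal, hedged sketch of consuming one of these files: the block schema (`type`, `text`, `bbox`, `page_idx`, plus `list_items` for list blocks) is taken from the VoxDialogue `content_list.json` added in this commit, but the loader itself is illustrative and not part of the repository.

```python
# Illustrative loader for one *_content_list.json from this batch (not repo code).
# Schema assumption, based on the diff below: a JSON array of blocks carrying
# "type", "text" or "list_items", "bbox" ([x0, y0, x1, y1]), and "page_idx".
import json
from pathlib import Path

def load_text_blocks(content_list_path: str) -> list[dict]:
    """Return the text-bearing blocks of a parsed paper, in reading order."""
    blocks = json.loads(Path(content_list_path).read_text(encoding="utf-8"))
    return [b for b in blocks if b.get("text") or b.get("list_items")]

if __name__ == "__main__":
    path = ("2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information "
            "Beyond Words_/0237f953-23bc-44fd-b727-785d584e994b_content_list.json")
    for block in load_text_blocks(path):
        print(block["page_idx"], block["type"], str(block.get("text", ""))[:80])
```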
.gitattributes CHANGED
@@ -3280,3 +3280,66 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2025/VisualAgentBench_[[:space:]]Towards[[:space:]]Large[[:space:]]Multimodal[[:space:]]Models[[:space:]]as[[:space:]]Visual[[:space:]]Foundation[[:space:]]Agents/676fdb11-f643-48dd-851f-7d6f17f25fbd_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2025/Visually[[:space:]]Consistent[[:space:]]Hierarchical[[:space:]]Image[[:space:]]Classification/99e7c309-af80-474a-8c59-e95455257282_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2025/Visually[[:space:]]Guided[[:space:]]Decoding_[[:space:]]Gradient-Free[[:space:]]Hard[[:space:]]Prompt[[:space:]]Inversion[[:space:]]with[[:space:]]Language[[:space:]]Models/c14ce079-f199-4061-98e1-b941ffc36cbf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/VoxDialogue_[[:space:]]Can[[:space:]]Spoken[[:space:]]Dialogue[[:space:]]Systems[[:space:]]Understand[[:space:]]Information[[:space:]]Beyond[[:space:]]Words_/0237f953-23bc-44fd-b727-785d584e994b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/W-PCA[[:space:]]Based[[:space:]]Gradient-Free[[:space:]]Proxy[[:space:]]for[[:space:]]Efficient[[:space:]]Search[[:space:]]of[[:space:]]Lightweight[[:space:]]Language[[:space:]]Models/651ec223-6a7e-4d1c-bdf9-1d5f8132dcb7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Ward_[[:space:]]Provable[[:space:]]RAG[[:space:]]Dataset[[:space:]]Inference[[:space:]]via[[:space:]]LLM[[:space:]]Watermarks/40bacbcf-2bbe-4071-992e-5b12d4460b40_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/WardropNet_[[:space:]]Traffic[[:space:]]Flow[[:space:]]Predictions[[:space:]]via[[:space:]]Equilibrium-Augmented[[:space:]]Learning/af6e964d-520b-4b92-a1b8-dbbc50874af3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Warm[[:space:]]Diffusion_[[:space:]]Recipe[[:space:]]for[[:space:]]Blur-Noise[[:space:]]Mixture[[:space:]]Diffusion[[:space:]]Models/eec3886d-db73-41c8-8176-f1d6479fe735_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Wasserstein-Regularized[[:space:]]Conformal[[:space:]]Prediction[[:space:]]under[[:space:]]General[[:space:]]Distribution[[:space:]]Shift/f38081e3-d6cf-4c34-b8dc-864bd3374a1b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Watch[[:space:]]Less,[[:space:]]Do[[:space:]]More_[[:space:]]Implicit[[:space:]]Skill[[:space:]]Discovery[[:space:]]for[[:space:]]Video-Conditioned[[:space:]]Policy/e95cc24c-c30e-4b66-8854-56fbc828fdb9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Watermark[[:space:]]Anything[[:space:]]With[[:space:]]Localized[[:space:]]Messages/92c7c1c6-3f70-47a0-9a49-ed1f73de4119_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/WavTokenizer_[[:space:]]an[[:space:]]Efficient[[:space:]]Acoustic[[:space:]]Discrete[[:space:]]Codec[[:space:]]Tokenizer[[:space:]]for[[:space:]]Audio[[:space:]]Language[[:space:]]Modeling/1a814df9-7a3f-46c8-af0e-4fdc8338bc15_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Wavelet[[:space:]]Diffusion[[:space:]]Neural[[:space:]]Operator/3555dcda-08db-4159-9519-a96b98e20b86_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Wavelet-based[[:space:]]Positional[[:space:]]Representation[[:space:]]for[[:space:]]Long[[:space:]]Context/5b26123b-d113-47c2-a042-ab759aa6cd33_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Wayward[[:space:]]Concepts[[:space:]]In[[:space:]]Multimodal[[:space:]]Models/f38a5218-f361-4b8b-ab7d-90e8fd62e51e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Weak[[:space:]]to[[:space:]]Strong[[:space:]]Generalization[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Multi-capabilities/74284e09-8984-449e-8c8c-7adf10931eb2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Weak-to-Strong[[:space:]]Generalization[[:space:]]Through[[:space:]]the[[:space:]]Data-Centric[[:space:]]Lens/9e437d28-97ef-4d2c-aa66-82c2b344d113_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Weakly[[:space:]]Supervised[[:space:]]Video[[:space:]]Scene[[:space:]]Graph[[:space:]]Generation[[:space:]]via[[:space:]]Natural[[:space:]]Language[[:space:]]Supervision/91021439-d497-4fe5-b11b-0c8d720ab646_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Weakly-Supervised[[:space:]]Affordance[[:space:]]Grounding[[:space:]]Guided[[:space:]]by[[:space:]]Part-Level[[:space:]]Semantic[[:space:]]Priors/dc2f2116-5dab-471d-811e-0f5f48244ad7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/WeatherGFM_[[:space:]]Learning[[:space:]]a[[:space:]]Weather[[:space:]]Generalist[[:space:]]Foundation[[:space:]]Model[[:space:]]via[[:space:]]In-context[[:space:]]Learning/112ef7a7-db0b-466a-9bab-75dac3685ab5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/WebRL_[[:space:]]Training[[:space:]]LLM[[:space:]]Web[[:space:]]Agents[[:space:]]via[[:space:]]Self-Evolving[[:space:]]Online[[:space:]]Curriculum[[:space:]]Reinforcement[[:space:]]Learning/6e21610b-811d-4148-b13d-384191e1b507_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Weighted[[:space:]]Multi-Prompt[[:space:]]Learning[[:space:]]with[[:space:]]Description-free[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Distillation/f9e34424-7882-4d57-98a9-0e87e4f889d8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Weighted-Reward[[:space:]]Preference[[:space:]]Optimization[[:space:]]for[[:space:]]Implicit[[:space:]]Model[[:space:]]Fusion/ec22ab08-984d-46e3-a1df-b952827a8be6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]Are[[:space:]]Good[[:space:]]Positional[[:space:]]Encodings[[:space:]]for[[:space:]]Directed[[:space:]]Graphs_/caf86908-e97c-45a5-9ce4-b1f169a660b5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]Do[[:space:]]You[[:space:]]See[[:space:]]in[[:space:]]Common_[[:space:]]Learning[[:space:]]Hierarchical[[:space:]]Prototypes[[:space:]]over[[:space:]]Tree-of-Life[[:space:]]to[[:space:]]Discover[[:space:]]Evolutionary[[:space:]]Traits/b2a96bf3-c242-4513-aaed-22183f88aa60_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]Makes[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Reason[[:space:]]in[[:space:]](Multi-Turn)[[:space:]]Code[[:space:]]Generation_/08b4e4f3-169c-43da-af0f-8df14c416e45_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]Makes[[:space:]]a[[:space:]]Maze[[:space:]]Look[[:space:]]Like[[:space:]]a[[:space:]]Maze_/9d0086ff-c8e9-4fc9-b214-7977056469dc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]Matters[[:space:]]When[[:space:]]Repurposing[[:space:]]Diffusion[[:space:]]Models[[:space:]]for[[:space:]]General[[:space:]]Dense[[:space:]]Perception[[:space:]]Tasks_/365df977-0e0a-4100-a03f-63c426df0087_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]Matters[[:space:]]in[[:space:]]Learning[[:space:]]from[[:space:]]Large-Scale[[:space:]]Datasets[[:space:]]for[[:space:]]Robot[[:space:]]Manipulation/fe4b64df-3894-42c9-ba7c-ee966dc25669_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]Secrets[[:space:]]Do[[:space:]]Your[[:space:]]Manifolds[[:space:]]Hold_[[:space:]]Understanding[[:space:]]the[[:space:]]Local[[:space:]]Geometry[[:space:]]of[[:space:]]Generative[[:space:]]Models/5758ad22-54a0-477e-9404-fa3008054116_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]is[[:space:]]Wrong[[:space:]]with[[:space:]]Perplexity[[:space:]]for[[:space:]]Long-context[[:space:]]Language[[:space:]]Modeling_/72ac67ad-b6dc-4a8e-98f3-bf8d030b802a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What[[:space:]]to[[:space:]]align[[:space:]]in[[:space:]]multimodal[[:space:]]contrastive[[:space:]]learning_/ebc06199-677a-4d08-9c66-467eca1acdd9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What's[[:space:]]New[[:space:]]in[[:space:]]My[[:space:]]Data_[[:space:]]Novelty[[:space:]]Exploration[[:space:]]via[[:space:]]Contrastive[[:space:]]Generation/cba97e16-b243-4761-a6ff-31aa9d2511ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/What's[[:space:]]the[[:space:]]Move_[[:space:]]Hybrid[[:space:]]Imitation[[:space:]]Learning[[:space:]]via[[:space:]]Salient[[:space:]]Points/9b74763c-aebe-41ab-903c-6573b1b3eb81_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/When[[:space:]]GNNs[[:space:]]meet[[:space:]]symmetry[[:space:]]in[[:space:]]ILPs_[[:space:]]an[[:space:]]orbit-based[[:space:]]feature[[:space:]]augmentation[[:space:]]approach/add5da53-a950-499b-9503-ed293a3645d0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/When[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks[[:space:]]Meet[[:space:]]Dynamic[[:space:]]Mode[[:space:]]Decomposition/c2a23d39-b2c5-494b-a589-ea8669d378e2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/When[[:space:]]LLMs[[:space:]]Play[[:space:]]the[[:space:]]Telephone[[:space:]]Game_[[:space:]]Cultural[[:space:]]Attractors[[:space:]]as[[:space:]]Conceptual[[:space:]]Tools[[:space:]]to[[:space:]]Evaluate[[:space:]]LLMs[[:space:]]in[[:space:]]Multi-turn[[:space:]]Settings/5f68c786-10ad-4922-a488-86f211e8c091_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/When[[:space:]]Prompt[[:space:]]Engineering[[:space:]]Meets[[:space:]]Software[[:space:]]Engineering_[[:space:]]CNL-P[[:space:]]as[[:space:]]Natural[[:space:]]and[[:space:]]Robust[[:space:]]_APIs''[[:space:]]for[[:space:]]Human-AI[[:space:]]Interaction/fe649d5a-bbf3-4582-8c48-ac0022fd68a7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/When[[:space:]]does[[:space:]]compositional[[:space:]]structure[[:space:]]yield[[:space:]]compositional[[:space:]]generalization_[[:space:]]A[[:space:]]kernel[[:space:]]theory./28e647fc-976f-41b5-a34b-a8ec18980b87_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/When[[:space:]]narrower[[:space:]]is[[:space:]]better_[[:space:]]the[[:space:]]narrow[[:space:]]width[[:space:]]limit[[:space:]]of[[:space:]]Bayesian[[:space:]]parallel[[:space:]]branching[[:space:]]neural[[:space:]]networks/037e7a43-588e-48f4-aeb4-f10472cf54a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Where[[:space:]]Am[[:space:]]I[[:space:]]and[[:space:]]What[[:space:]]Will[[:space:]]I[[:space:]]See_[[:space:]]An[[:space:]]Auto-Regressive[[:space:]]Model[[:space:]]for[[:space:]]Spatial[[:space:]]Localization[[:space:]]and[[:space:]]View[[:space:]]Prediction/f8b94e2a-bb7d-4a83-b247-e38d8f0aa7f0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Which[[:space:]]Tasks[[:space:]]Should[[:space:]]Be[[:space:]]Compressed[[:space:]]Together_[[:space:]]A[[:space:]]Causal[[:space:]]Discovery[[:space:]]Approach[[:space:]]for[[:space:]]Efficient[[:space:]]Multi-Task[[:space:]]Representation[[:space:]]Compression/76430cba-8e0a-493d-a7c8-f2672166a4e1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Why[[:space:]]Does[[:space:]]the[[:space:]]Effective[[:space:]]Context[[:space:]]Length[[:space:]]of[[:space:]]LLMs[[:space:]]Fall[[:space:]]Short_/b04da222-bc4c-46f7-9bb6-8fdd06d34377_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Why[[:space:]]In-Context[[:space:]]Learning[[:space:]]Models[[:space:]]are[[:space:]]Good[[:space:]]Few-Shot[[:space:]]Learners_/d2e9fa4d-6789-4cd3-9aee-7dde80eaba40_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Wicked[[:space:]]Oddities_[[:space:]]Selectively[[:space:]]Poisoning[[:space:]]for[[:space:]]Effective[[:space:]]Clean-Label[[:space:]]Backdoor[[:space:]]Attacks/9f0065f3-c32c-4e61-9e30-72f4a1be79d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Words[[:space:]]in[[:space:]]Motion_[[:space:]]Extracting[[:space:]]Interpretable[[:space:]]Control[[:space:]]Vectors[[:space:]]for[[:space:]]Motion[[:space:]]Transformers/5c510463-aca6-4b1b-9c83-a073a5b6523d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/WorkflowLLM_[[:space:]]Enhancing[[:space:]]Workflow[[:space:]]Orchestration[[:space:]]Capability[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/251e79c3-521d-4d76-bd99-e6b2b869c61a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/World[[:space:]]Model[[:space:]]on[[:space:]]Million-Length[[:space:]]Video[[:space:]]And[[:space:]]Language[[:space:]]With[[:space:]]Blockwise[[:space:]]RingAttention/70a499c3-612d-4bd4-a8e8-974905a39225_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/X-Drive_[[:space:]]Cross-modality[[:space:]]Consistent[[:space:]]Multi-Sensor[[:space:]]Data[[:space:]]Synthesis[[:space:]]for[[:space:]]Driving[[:space:]]Scenarios/fa326aaa-bb49-4de7-9075-9006124bc096_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/X-Fi_[[:space:]]A[[:space:]]Modality-Invariant[[:space:]]Foundation[[:space:]]Model[[:space:]]for[[:space:]]Multimodal[[:space:]]Human[[:space:]]Sensing/73cee3d4-172e-4762-901f-8b8a36d9043b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/X-NeMo_[[:space:]]Expressive[[:space:]]Neural[[:space:]]Motion[[:space:]]Reenactment[[:space:]]via[[:space:]]Disentangled[[:space:]]Latent[[:space:]]Attention/9ea5035b-4e3a-4476-8ffe-49403c10a216_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/XAIguiFormer_[[:space:]]explainable[[:space:]]artificial[[:space:]]intelligence[[:space:]]guided[[:space:]]transformer[[:space:]]for[[:space:]]brain[[:space:]]disorder[[:space:]]identification/caf22a0e-05df-4f67-8479-97f4299dda87_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/XLand-100B_[[:space:]]A[[:space:]]Large-Scale[[:space:]]Multi-Task[[:space:]]Dataset[[:space:]]for[[:space:]]In-Context[[:space:]]Reinforcement[[:space:]]Learning/1121745f-6a5f-45e2-a5bf-099463d7d876_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/YOLO-RD_[[:space:]]Introducing[[:space:]]Relevant[[:space:]]and[[:space:]]Compact[[:space:]]Explicit[[:space:]]Knowledge[[:space:]]to[[:space:]]YOLO[[:space:]]by[[:space:]]Retriever-Dictionary/81ed7c69-341a-4a0b-bcb9-d3bc735bcc97_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/You[[:space:]]Only[[:space:]]Prune[[:space:]]Once_[[:space:]]Designing[[:space:]]Calibration-Free[[:space:]]Model[[:space:]]Compression[[:space:]]With[[:space:]]Policy[[:space:]]Learning/b7e87974-13a9-4b60-afce-0dddfbaf6a2c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/You[[:space:]]Only[[:space:]]Sample[[:space:]]Once_[[:space:]]Taming[[:space:]]One-Step[[:space:]]Text-to-Image[[:space:]]Synthesis[[:space:]]by[[:space:]]Self-Cooperative[[:space:]]Diffusion[[:space:]]GANs/3ee8dcef-a938-4a27-8ae7-d28202398e1a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/YouTube-SL-25_[[:space:]]A[[:space:]]Large-Scale,[[:space:]]Open-Domain[[:space:]]Multilingual[[:space:]]Sign[[:space:]]Language[[:space:]]Parallel[[:space:]]Corpus/c0716502-c5d2-435d-90f0-efa1a639717c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Youku[[:space:]]Dense[[:space:]]Caption_[[:space:]]A[[:space:]]Large-scale[[:space:]]Chinese[[:space:]]Video[[:space:]]Dense[[:space:]]Caption[[:space:]]Dataset[[:space:]]and[[:space:]]Benchmarks/76804e10-3e34-4bb4-8330-30abd5af32b0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Your[[:space:]]Absorbing[[:space:]]Discrete[[:space:]]Diffusion[[:space:]]Secretly[[:space:]]Models[[:space:]]the[[:space:]]Conditional[[:space:]]Distributions[[:space:]]of[[:space:]]Clean[[:space:]]Data/e3effae2-6762-4c06-93e5-7dfc76ee398b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Your[[:space:]]Weak[[:space:]]LLM[[:space:]]is[[:space:]]Secretly[[:space:]]a[[:space:]]Strong[[:space:]]Teacher[[:space:]]for[[:space:]]Alignment/6bda8e01-8dad-4809-bdc1-80a969530eed_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/ZETA_[[:space:]]Leveraging[[:space:]]$Z$-order[[:space:]]Curves[[:space:]]for[[:space:]]Efficient[[:space:]]Top-$k$[[:space:]]Attention/a33f481b-4b46-46cd-9d5c-22fedcf32365_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/ZIP_[[:space:]]An[[:space:]]Efficient[[:space:]]Zeroth-order[[:space:]]Prompt[[:space:]]Tuning[[:space:]]for[[:space:]]Black-box[[:space:]]Vision-Language[[:space:]]Models/3e6683fc-427a-41d5-9065-b313c61f5c31_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Zero-Shot[[:space:]]Natural[[:space:]]Language[[:space:]]Explanations/0f947154-14ed-420f-9cc3-64214223be52_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Zero-Shot[[:space:]]Whole-Body[[:space:]]Humanoid[[:space:]]Control[[:space:]]via[[:space:]]Behavioral[[:space:]]Foundation[[:space:]]Models/02de7d75-8361-4d66-8e49-9f408ef84c44_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Zero-cost[[:space:]]Proxy[[:space:]]for[[:space:]]Adversarial[[:space:]]Robustness[[:space:]]Evaluation/1d4a0495-8e97-4324-9e7e-7de7cebd0051_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2025/Zero-shot[[:space:]]Imputation[[:space:]]with[[:space:]]Foundation[[:space:]]Inference[[:space:]]Models[[:space:]]for[[:space:]]Dynamical[[:space:]]Systems/c015f53e-656c-4ced-9770-c69adbfca6f8_origin.pdf filter=lfs diff=lfs merge=lfs -text
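
The patterns above register each `origin.pdf` under Git LFS (`filter=lfs diff=lfs merge=lfs -text`), so a plain `git clone` without LFS support yields pointer files rather than the PDFs. Below is a hedged sketch of fetching one tracked file through `huggingface_hub` instead; the `repo_id` is a placeholder, not the actual repository name:

```python
# Hypothetical fetch of one LFS-tracked PDF from this dataset repository.
# hf_hub_download resolves LFS pointers transparently; repo_id is a placeholder.
from huggingface_hub import hf_hub_download

local_pdf = hf_hub_download(
    repo_id="Chelsea707/example-dataset",  # placeholder: substitute the real dataset id
    repo_type="dataset",
    filename=(
        "2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/"
        "40bacbcf-2bbe-4071-992e-5b12d4460b40_origin.pdf"
    ),
)
print(local_pdf)
```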
2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/0237f953-23bc-44fd-b727-785d584e994b_content_list.json ADDED
@@ -0,0 +1,1723 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "VOXDIALOGUE: CAN SPOKEN DIALOGUE SYSTEMS UNDERSTAND INFORMATION BEYOND WORDS?",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 173,
8
+ 98,
9
+ 823,
10
+ 148
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Xize Cheng $^{1*}$ Ruofan Hu $^{1*}$ Xiaoda Yang $^{1}$ Jingyu Lu $^{1}$ Dongjie Fu $^{1}$ Boyang Zhang $^{1}$ Zehan Wang $^{1}$ Shengpeng Ji $^{1}$ Rongjie Huang $^{1}$ Tao Jin $^{1}$ Zhou Zhao $^{1\\dagger}$",
17
+ "bbox": [
18
+ 179,
19
+ 167,
20
+ 807,
21
+ 200
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Zhejiang University<sup>1</sup> chengxize@zju.edu.cn",
28
+ "bbox": [
29
+ 183,
30
+ 200,
31
+ 491,
32
+ 214
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Code & Data: https://voxdialogue.github.io/",
39
+ "bbox": [
40
+ 183,
41
+ 214,
42
+ 542,
43
+ 227
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "ABSTRACT",
50
+ "text_level": 1,
51
+ "bbox": [
52
+ 450,
53
+ 265,
54
+ 545,
55
+ 279
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "With the rapid advancement of large models, voice assistants are gradually acquiring the ability to engage in open-ended daily conversations with humans. However, current spoken dialogue systems often overlook multi-modal information in audio beyond text, such as speech rate, volume, emphasis, and background sounds. Relying solely on automatic speech recognition (ASR) can lead to the loss of valuable auditory cues, thereby weakening the system's ability to generate contextually appropriate responses. To address this limitation, we propose VoxDialogue, a comprehensive benchmark for evaluating the ability of spoken dialogue systems to understand multi-modal information beyond text. Specifically, we have identified 12 attributes highly correlated with acoustic information beyond words and have meticulously designed corresponding spoken dialogue test sets for each attribute, encompassing a total of $4.5\\mathrm{K}$ multi-turn spoken dialogue samples. Finally, we evaluated several existing spoken dialogue models, analyzing their performance on the 12 attribute subsets of VoxDialogue. Experiments have shown that in spoken dialogue scenarios, many acoustic cues cannot be conveyed through textual information and must be directly interpreted from the audio input. In contrast, while direct spoken dialogue systems excel at processing acoustic signals, they still face limitations in handling complex dialogue tasks due to their restricted context understanding capabilities.",
62
+ "bbox": [
63
+ 228,
64
+ 297,
65
+ 767,
66
+ 563
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "1 INTRODUCTION",
73
+ "text_level": 1,
74
+ "bbox": [
75
+ 173,
76
+ 592,
77
+ 336,
78
+ 607
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "Voice assistants (Ji et al., 2024a) have rapidly evolved into a focal point of both academic research and industry innovation, aiming to facilitate daily conversations (Li et al., 2017; Lee et al., 2023) and task-oriented dialogues (Budzianowski et al., 2018; Si et al., 2024) with humans. Early iterations relied heavily on automatic speech recognition (ASR) (Cheng et al., 2023b;c; Fu et al., 2024; Lei et al., 2024; Huang et al., 2023), combined with dialogue understanding and state management, to support basic, predefined tasks. However, these systems (Hoy, 2018) were constrained by their limited scope and inability to handle open-ended interactions. The advent of large language models (LLMs) (Touvron et al., 2023) with enhanced understanding and reasoning capabilities has revolutionized voice assistants, enabling them to engage in more dynamic and unrestricted dialogues with users (OpenAI, 2024b). This marks a significant departure from their earlier, more constrained functionalities, opening up new possibilities for human-computer interaction.",
85
+ "bbox": [
86
+ 169,
87
+ 625,
88
+ 826,
89
+ 779
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "Yet, despite these advancements, current spoken dialogue systems (Zhang et al., 2023; Xie & Wu, 2024; Fang et al., 2024; Cheng et al., 2025) often overlook the rich multimodal information embedded in audio beyond mere spoken words—such as intonation, volume, rhythm, and background sounds. Relying solely on ASR leads to the omission of valuable auditory cues, diminishing the system's ability to generate contextually appropriate responses. For example, a system might fail to adjust its language to match a user's emotional state or regional accent, such as responding with \"Yes, madam\" to a female voice or adopting British colloquialisms when detecting a British accent.",
96
+ "bbox": [
97
+ 169,
98
+ 785,
99
+ 826,
100
+ 883
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "header",
106
+ "text": "Published as a conference paper at ICLR 2025",
107
+ "bbox": [
108
+ 171,
109
+ 32,
110
+ 478,
111
+ 47
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "page_footnote",
117
+ "text": "*Equal Contribution.",
118
+ "bbox": [
119
+ 189,
120
+ 896,
121
+ 320,
122
+ 910
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "page_footnote",
128
+ "text": "† Corresponding author.",
129
+ "bbox": [
130
+ 192,
131
+ 910,
132
+ 334,
133
+ 922
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "page_number",
139
+ "text": "1",
140
+ "bbox": [
141
+ 493,
142
+ 948,
143
+ 504,
144
+ 959
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "table",
150
+ "img_path": "images/26f6b4559e9fc931151d1eb228e9d2f04f108db4861c197b2cadda01036b3b2f.jpg",
151
+ "table_caption": [
152
+ "Table 1: Comparison of spoken language and audio comprehension benchmarks in terms of data types and evaluation dimensions. SL. refers to Spoken Language, while Dlg. indicates whether the benchmark evaluates on dialogue tasks. Aud. represents audio comprehension, and Mus. refers to music comprehension. Speaker Info includes attributes such as age (Age), gender (Gen), accent (Acc), and language (Lan). Paralinguistic Info covers aspects like emotion (Emo), volume (Vol), speech rate (Spd), speech fidelity (Fid), stress (Str), and non-verbal expressions (NVE). Although LeBenchmark includes a small amount of conversational data (29 hours out of 2933 hours), it does not evaluate on the dialogue tasks. Please note that although AirBench can assess spoken language comprehension, its evaluation of conversational ability (AirBench-Chat) is based on text-based interactions and does not address spoken dialogue capabilities."
153
+ ],
154
+ "table_footnote": [],
155
+ "table_body": "<table><tr><td rowspan=\"2\">Benchmarks</td><td colspan=\"2\">Types</td><td colspan=\"4\">Evaluation Dimensions</td></tr><tr><td>SL.</td><td>Dlg.</td><td>Aud.</td><td>Mus.</td><td>Speaker Info</td><td>Paralinguistic Info</td></tr><tr><td>SUPERB (Yang et al., 2021)</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>✓ (Emo)</td></tr><tr><td>SLUE (Shon et al., 2022)</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>LeBenchmark (Evain et al., 2021)</td><td>✓</td><td>X†</td><td>X</td><td>X</td><td>X</td><td>✓ (Emo)</td></tr><tr><td>AF-Dialogue (Kong et al., 2024)</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>X</td></tr><tr><td>AirBench (Yang et al., 2024a)</td><td>X‡</td><td>✓</td><td>✓</td><td>✓</td><td>✓ (Age,Gen)</td><td>✓ (Emo)</td></tr><tr><td>SpokenWOZ (Si et al., 2024)</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>SD-EVAL (Ao et al., 2024)</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>✓ (Age,Gen,Acc)</td><td>✓ (Emo)</td></tr><tr><td>VoxDialogue (ours)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓ (Age,Gen,Acc,Lan)</td><td>✓ (Emo,Vol,Spd,Fid,Str,NVE)</td></tr></table>",
156
+ "bbox": [
157
+ 173,
158
+ 247,
159
+ 821,
160
+ 425
161
+ ],
162
+ "page_idx": 1
163
+ },
164
+ {
165
+ "type": "text",
166
+ "text": "To address these limitations, recent research has shifted towards developing multimodal audio-language models that enhance system comprehension of audio inputs. Emotion2Vec (Ma et al., 2023), trained on vast emotional speech data, stands as the first high-quality pre-trained model for emotion recognition. Qwen-Audio 1/2 (Chu et al., 2023; 2024) have been trained on extensive datasets encompassing over 30 audio-related tasks, enabling them to understand various audio types—including speech, audio events, and music. Pushing the envelope further, FunAudioLLM (SpeechTeam, 2024) offers full-scene recognition capabilities, detecting non-verbal sounds like laughter and breathing within speech.",
167
+ "bbox": [
168
+ 169,
169
+ 438,
170
+ 823,
171
+ 551
172
+ ],
173
+ "page_idx": 1
174
+ },
175
+ {
176
+ "type": "text",
177
+ "text": "As large-scale audio-language models continue to evolve rapidly, the scientific community has increasingly recognized the urgent need for a comprehensive benchmark to effectively evaluate spoken dialogue systems. While some progress has been made, existing benchmarks often exhibit notable shortcomings. For instance, SUPERB (Yang et al., 2021) is the first benchmark specifically designed for spoken language, but it primarily focuses on coarse-grained semantic understanding tasks, overlooking the importance of various acoustic features. Other benchmarks, such as AirBench (Yang et al., 2024a) and Audio-Flamingo (Kong et al., 2024), delve deeply into audio understanding, but their dialogue content is limited to the textual modality, making them unsuitable for evaluating spoken dialogue tasks. SpokenWOZ (Si et al., 2024), though valuable for its real human-computer interaction data, is restricted to task-driven dialogues and lacks detailed fine-grained labels. To address more specific attributes of spoken dialogue, SD-EVAL (Ao et al., 2024) shifts the focus to characteristics like gender, age, accent, and emotion, yet its effectiveness is limited by the use of speech utterances that are not derived from dialogue scenarios.",
178
+ "bbox": [
179
+ 169,
180
+ 556,
181
+ 826,
182
+ 737
183
+ ],
184
+ "page_idx": 1
185
+ },
186
+ {
187
+ "type": "text",
188
+ "text": "To better benchmark spoken dialogue systems, we analyzed non-textual multimodal acoustic information that may affect dialogue responses, which can be categorized into three main types: speaker information (age, gender, accent, language), paralinguistic information (emotion, volume, speed, fidelity, stress, and various non-verbal expressions), and background sounds (audio and music). In real-world dialogue scenarios, it is crucial to capture not only the semantic content of the speech but also these acoustic cues to generate more appropriate responses. For example, determining the speaker's age from their vocal tone can help select a suitable form of address. We designed a tailored spoken dialogue synthesis pipeline for each attribute to ensure that the synthesized dialogue data aligns accurately with the corresponding attribute. Leveraging the strong inference capabilities of large language models (LLMs) and high-fidelity text-to-speech (TTS) synthesis (Ji et al., 2024b; Du et al., 2024), we constructed the VoxDialogue benchmark, comprising 12 dialogue scenarios specifically tailored to different acoustic attributes. As shown in Figure 1, to the best of our knowledge, this is the most comprehensive work focusing on acoustic information in spoken dialogue",
189
+ "bbox": [
190
+ 169,
191
+ 743,
192
+ 826,
193
+ 925
194
+ ],
195
+ "page_idx": 1
196
+ },
197
+ {
198
+ "type": "header",
199
+ "text": "Published as a conference paper at ICLR 2025",
200
+ "bbox": [
201
+ 171,
202
+ 32,
203
+ 478,
204
+ 47
205
+ ],
206
+ "page_idx": 1
207
+ },
208
+ {
209
+ "type": "page_number",
210
+ "text": "2",
211
+ "bbox": [
212
+ 493,
213
+ 948,
214
+ 504,
215
+ 959
216
+ ],
217
+ "page_idx": 1
218
+ },
219
+ {
220
+ "type": "text",
221
+ "text": "benchmarks. Based on VoxDialogue, we evaluated several existing spoken dialogue systems, comparing the performance of ASR-based dialogue systems and direct dialogue systems across various acoustic-related tasks. The results demonstrate that ASR-based methods are limited in their ability to understand the diverse acoustic attributes present in spoken dialogues, highlighting the importance of developing large-scale audio-language models. At the same time, existing direct dialogue systems (such as Qwen2-Audio) still exhibit limitations in long-context reasoning, indicating the need for further improvement in their contextual understanding capabilities. All our code and data will be open-sourced. Our main contributions are:",
222
+ "bbox": [
223
+ 169,
224
+ 103,
225
+ 826,
226
+ 217
227
+ ],
228
+ "page_idx": 2
229
+ },
230
+ {
231
+ "type": "list",
232
+ "sub_type": "text",
233
+ "list_items": [
234
+ "- We present the first benchmark for evaluating the ability of spoken dialogue systems to understand acoustic information beyond speech content, VoxDialogue, which integrates 12 acoustic dimensions, including speaker attributes (age, gender, accent, language), paralinguistic features (emotion, volume, speed, fidelity, stress, non-verbal expressions), and environmental information (audio, music).",
235
+ "- We were the first to develop distinct spoken dialogue data synthesis methods tailored for different acoustic attributes. This approach enables large-scale synthesis of spoken dialogue data, supporting extensive training for spoken dialogue models and endowing them with more comprehensive acoustic understanding capabilities.",
236
+ "- We conducted a systematic evaluation of existing spoken dialogue systems, comparing their performance in terms of understanding acoustic information, supplemented by a qualitative analysis using a GPT-based metric. Specifically, inspired by the MOS (Mean Opinion Score) evaluation mechanism, we provided GPT with descriptive criteria corresponding to different scores, enabling the evaluation model to more accurately assess each response in terms of both acoustic attributes and content quality."
237
+ ],
238
+ "bbox": [
239
+ 169,
240
+ 227,
241
+ 826,
242
+ 444
243
+ ],
244
+ "page_idx": 2
245
+ },
246
+ {
247
+ "type": "text",
248
+ "text": "2 RELATED WORKS",
249
+ "text_level": 1,
250
+ "bbox": [
251
+ 171,
252
+ 464,
253
+ 354,
254
+ 479
255
+ ],
256
+ "page_idx": 2
257
+ },
258
+ {
259
+ "type": "text",
260
+ "text": "2.1 SPOKEN DIALOG SYSTEM",
261
+ "text_level": 1,
262
+ "bbox": [
263
+ 171,
264
+ 496,
265
+ 397,
266
+ 511
267
+ ],
268
+ "page_idx": 2
269
+ },
270
+ {
271
+ "type": "text",
272
+ "text": "With the development of large-scale language models, increasingly powerful spoken dialogue models have emerged, utilizing extensive training corpora for single tasks with LLM-based instructions to achieve comprehensive audio understanding capabilities. SpeechGPT (Zhang et al., 2023) integrates discrete speech units into large language models (LLMs), making it a speech-centric model. Qwen-Audio 1/2 (Chu et al., 2023; 2024) established the first large-scale, comprehensive audio model for over 30 audio-related tasks. Similarly, Salmonn (Tang et al., 2023) addresses task complexity in audio models by introducing more intricate story generation tasks. Additionally, some directly use dialogue databases for training. StyleTalk (Lin et al., 2024b) focused on emotional dialogue tasks and introduced the first spoken dialogue model capable of generating responses with varying emotional tones. Recent studies (Cheng et al., 2023c;b;a; Lei et al., 2024; Fu et al., 2024; Yang et al., 2024b) in conversational AI even have begun examining how visual data integration can enhance the contextual awareness of spoken dialogue systems.",
273
+ "bbox": [
274
+ 169,
275
+ 523,
276
+ 826,
277
+ 691
278
+ ],
279
+ "page_idx": 2
280
+ },
281
+ {
282
+ "type": "text",
283
+ "text": "However, existing spoken dialogue models (Xie & Wu, 2024; Fang et al., 2024) primarily focus on understanding speech content and audio information, with only a few works specifically addressing detailed acoustic attributes within the speech. This oversight results in the loss of crucial information in spoken dialogue, which, as our experiments show, can significantly undermine the quality and effectiveness of response generation in daily dialogue.",
284
+ "bbox": [
285
+ 169,
286
+ 696,
287
+ 826,
288
+ 768
289
+ ],
290
+ "page_idx": 2
291
+ },
292
+ {
293
+ "type": "text",
294
+ "text": "2.2 SPOKEN LANGUAGE BENCHMARK",
295
+ "text_level": 1,
296
+ "bbox": [
297
+ 171,
298
+ 786,
299
+ 454,
300
+ 800
301
+ ],
302
+ "page_idx": 2
303
+ },
304
+ {
305
+ "type": "text",
306
+ "text": "With the rapid development of large-scale audio models (Chu et al., 2024; SpeechTeam, 2024), the scientific community has increasingly recognized the need for a comprehensive benchmark to evaluate spoken dialogue systems. While some progress has been made, many existing benchmarks still fall short. For instance, SUPERB (Yang et al., 2021) was the first benchmark specifically designed for spoken language, but it primarily focuses on coarse-grained understanding tasks. AudioFlamingo (Kong et al., 2024) explores audio comprehension but is limited to text-based dialogue data, lacking a focus on evaluating spoken dialogue. AirBench (Yang et al., 2024a) was the first to propose using GPT-4 (OpenAI, 2024b) to measure spoken dialogue system performance, but its",
307
+ "bbox": [
308
+ 169,
309
+ 811,
310
+ 826,
311
+ 926
312
+ ],
313
+ "page_idx": 2
314
+ },
315
+ {
316
+ "type": "header",
317
+ "text": "Published as a conference paper at ICLR 2025",
318
+ "bbox": [
319
+ 171,
320
+ 32,
321
+ 478,
322
+ 47
323
+ ],
324
+ "page_idx": 2
325
+ },
326
+ {
327
+ "type": "page_number",
328
+ "text": "3",
329
+ "bbox": [
330
+ 493,
331
+ 948,
332
+ 504,
333
+ 959
334
+ ],
335
+ "page_idx": 2
336
+ },
337
+ {
338
+ "type": "table",
339
+ "img_path": "images/fac6cb634944cc03b47755363b12a6035b60c5842fadfbca0ccdb5da29d835a2.jpg",
340
+ "table_caption": [
341
+ "Table 2: Dialogue example of VoxDialog. The bold black text highlights the acoustic and background information unique to spoken dialogue, while the blue text emphasizes responses that depend on these information."
342
+ ],
343
+ "table_footnote": [],
344
+ "table_body": "<table><tr><td colspan=\"2\">Attributes Example</td></tr><tr><td colspan=\"2\">I. Speaker Information</td></tr><tr><td>Gender</td><td>A: Excuse me, I&#x27;m looking for a reliable smartphone that suits my needs. (Man Voice)B: Certainly, Sir! ... excellent performance and a long-lasting battery, perfect for business tasks.</td></tr><tr><td>Age</td><td>A: What dishes do you recommend? (Young Voice)B: Sure! You might enjoy our specialty beef burger, it&#x27;s fantastic with fresh juice.</td></tr><tr><td>Accent</td><td>A: Could you help me with my schedule? (British Accent)B: Of course, I&#x27;ll assist you in organizing your timetable.</td></tr><tr><td>Language</td><td>A:今天天气怎么样? (Speaking in Chinese)B:今天天气晴朗,气温适中,适合外出活动。</td></tr><tr><td colspan=\"2\">II. Paralinguistic Information</td></tr><tr><td>Emotion</td><td>A: Can you recommend a good movie? (Sad Tone)B: I get it. Maybe Soul—it&#x27;s emotional and might bring you some comfort.</td></tr><tr><td>Volume</td><td>A: Can you tell me how to solve this problem? (Loud Volume)B: Could you speak more quietly? People around are studying.</td></tr><tr><td>Speed</td><td>A: I&#x27;d like to get the burger, a soda, and also the pasta, please. (Speaking Quickly)B: Sorry, could you say that more slowly? I didn&#x27;t catch everything you said.</td></tr><tr><td>Fidelity</td><td>A: Can you tell me the time of the meeting? (Poor Audio Quality)B: I&#x27;m having trouble hearing you. Could you improve the audio quality?</td></tr><tr><td>Stress</td><td>A: I really don&#x27;t like making sushi. (Emphasis on &quot;making sushi&quot;)B: Oh, I see. If you don&#x27;t like making sushi, what other types of cooking do you enjoy?</td></tr><tr><td>Non-verbal</td><td>A: Could you help me move this box? (With a sigh before speaking)</td></tr><tr><td>Expressions</td><td>B: Are you feeling okay? It seems like you&#x27;re really tired. I can take care of it for you.</td></tr><tr><td colspan=\"2\">III. Environmental Information</td></tr><tr><td>Audio Events</td><td>What was that sound just now? (Background sound: airplane engine sound, explosion sound)That was a loud explosion. It sounded like the plane exploded. Hope no one was hurt.</td></tr><tr><td>Music</td><td>A: Hey, what instrument is this song played on? (Music: Piano Song, Sad Song)B: It should be the piano, it sounds so sad.</td></tr></table>",
345
+ "bbox": [
346
+ 173,
347
+ 148,
348
+ 823,
349
+ 592
350
+ ],
351
+ "page_idx": 3
352
+ },
353
+ {
354
+ "type": "text",
355
+ "text": "evaluation set remains constrained to QA interactions. SpokenWOZ (Si et al., 2024) is a large-scale task-oriented dataset that offers real human interaction data, making it valuable for evaluating task-driven dialogue systems. SD-Eval (Ao et al., 2024), which emphasizes acoustic attributes such as gender, age, accent, and emotion, uses raw audio from confessional-style corpora, making it less suitable for conversational scenarios.",
356
+ "bbox": [
357
+ 169,
358
+ 599,
359
+ 823,
360
+ 670
361
+ ],
362
+ "page_idx": 3
363
+ },
364
+ {
365
+ "type": "text",
366
+ "text": "However, due to the challenges associated with collecting spoken dialogue data in specific scenarios, existing benchmarks are unable to effectively evaluate whether spoken dialogue systems can understand various information beyond words. To address this limitation, we developed VoxDialogue, a benchmark built using synthetic data that focuses on 12 acoustic dimensions that can significantly influence dialogue content. These dimensions include speaker information (age, gender, accent, language), paralinguistic information (emotion, volume, speed, fidelity, stress, non-verbal expressions), and environmental information (audio events, music). Ultimately, VoxDialogue enables a comprehensive evaluation of the ability of current spoken dialogue systems to process and interpret such detailed acoustic information.",
367
+ "bbox": [
368
+ 169,
369
+ 676,
370
+ 826,
371
+ 801
372
+ ],
373
+ "page_idx": 3
374
+ },
375
+ {
376
+ "type": "text",
377
+ "text": "3 VOXDIALOGUE",
378
+ "text_level": 1,
379
+ "bbox": [
380
+ 171,
381
+ 823,
382
+ 336,
383
+ 838
384
+ ],
385
+ "page_idx": 3
386
+ },
387
+ {
388
+ "type": "text",
389
+ "text": "3.1 OVERVIEW",
390
+ "text_level": 1,
391
+ "bbox": [
392
+ 171,
393
+ 854,
394
+ 292,
395
+ 869
396
+ ],
397
+ "page_idx": 3
398
+ },
399
+ {
400
+ "type": "text",
401
+ "text": "Spoken dialogue systems are typically used in daily dialogues (Lin et al., 2024a). As shown in Table 2, we evaluate the performance of spoken dialogue systems across these three categories in daily dialogue scenarios. Beyond understanding the speech content, spoken dialogue systems must also",
402
+ "bbox": [
403
+ 169,
404
+ 881,
405
+ 825,
406
+ 925
407
+ ],
408
+ "page_idx": 3
409
+ },
410
+ {
411
+ "type": "header",
412
+ "text": "Published as a conference paper at ICLR 2025",
413
+ "bbox": [
414
+ 171,
415
+ 32,
416
+ 478,
417
+ 47
418
+ ],
419
+ "page_idx": 3
420
+ },
421
+ {
422
+ "type": "page_number",
423
+ "text": "4",
424
+ "bbox": [
425
+ 493,
426
+ 948,
427
+ 504,
428
+ 959
429
+ ],
430
+ "page_idx": 3
431
+ },
432
+ {
433
+ "type": "text",
434
+ "text": "generate the most appropriate responses by considering the speaker's emotions, gender, and other acoustic-related information. Therefore, unlike traditional text-based dialogue benchmarks (Li et al., 2017), we systematically analyze the acoustic characteristics that may influence response content and have developed a tailored evaluation set specifically for spoken dialogue systems. The evaluation set for daily dialogue is divided into the following categories: I. Speaker Information. (1) Age: Responses should be tailored to the speaker's age, adjusting salutations (e.g., Mrs./Miss) or suggesting content appropriate for their age group. (2) Gender: Responses should be gender-specific, modifying salutations (e.g., Mr./Mrs.) or offering preferences based on gender. (3) Accent: Responses should account for the speaker's accent, selecting vocabulary that aligns with their speech (e.g., British people may be more accustomed to using 'timetable' instead of 'schedule'). (4) Language: Responses should be adapted to the speaker's language, choosing the most appropriate language for the response. II. Acoustic Information. (5) Emotion: Responses should detect the speaker's emotional state and provide a suitable reply (e.g., suggesting comforting music when sensing distress). (6) Volume: Responses should consider the speaker's volume, asking them to lower or raise their voice (e.g., requesting quieter speech in quiet environments). (7) Speed: Responses should adjust to the speaker's speech rate, asking them to slow down or clarify if speaking too quickly for comprehension. (8) Fidelity: Responses should detect poor audio quality and ask the speaker to repeat or improve the clarity of their speech for better understanding. (9) Stress: Responses should recognize emphasis on specific words and tailor replies to focus on the stressed content. (10) Non-verbal Expressions: Responses should account for non-verbal cues such as sighs, detecting emotions like tiredness or frustration, and offering assistance accordingly. III. Background Sound. (11) Audio Event: Responses should recognize relevant audio events and adapt accordingly. (12) Music: Responses should adjust to the type and mood of the background music.",
435
+ "bbox": [
436
+ 169,
437
+ 103,
438
+ 826,
439
+ 422
440
+ ],
441
+ "page_idx": 4
442
+ },
443
+ {
444
+ "type": "text",
445
+ "text": "3.2 SPOKEN DIALOGUE GENERATION",
446
+ "text_level": 1,
447
+ "bbox": [
448
+ 171,
449
+ 439,
450
+ 452,
451
+ 455
452
+ ],
453
+ "page_idx": 4
454
+ },
455
+ {
456
+ "type": "text",
457
+ "text": "Stage1: Dialogue Script Synthesis. Building on the methodology of previous studies (Lin et al., 2024a; Cheng et al., 2025), we employed large language models with advanced reasoning capabilities to synthesize spoken conversation scripts tailored to diverse scenarios and acoustic conditions. Specifically, we utilized GPT-4o (OpenAI, 2024a) to pre-generate several rounds of historical conversations, followed by the generation of contextually appropriate responses under various controlled acoustic conditions. This approach ensures that the synthesized dialogue scripts capture a wide range of acoustic features, thereby enhancing their robustness and diversity.",
458
+ "bbox": [
459
+ 169,
460
+ 465,
461
+ 826,
462
+ 566
463
+ ],
464
+ "page_idx": 4
465
+ },
466
+ {
467
+ "type": "text",
468
+ "text": "Stage2: Spoken Dialogue Generation. We carefully tailored the most appropriate speech synthesis method for each attribute during the generation process. We designed a tailored spoken dialogue synthesis pipeline for each attribute to ensure that the synthesized dialogue data aligns accurately with the corresponding attribute: (1) Gender, Speed and Emotion. We use COSYVOICE-300M-INSTRUCT<sup>1</sup> to achieve condition speech generation based on gender and emotion by adjusting style instructions. (2) Stress, Language, and Non-verbal Expressions. We achieved control over these aspects by adjusting the text content in the COSYVOICE-300M-INSTRUCT (Stress, Non-verbal Expressions) and COSYVOICE-300M-SFT<sup>2</sup> (Language), adding $<$ stress $>$ /stress $>$ , [laughter], or changing the language of the text. (3) Volume, Fidelity, Audio Events, and Music. We used COSYVOICE-300M-SFT to generate the basic speech, then applied post-processing techniques to fine-tune these specific attributes. The details of post-processing are shown in Stage 4. (4) Age. We randomly selected 1,000 speaker samples of different ages from Hechmi et al. (2021) and Tawara et al. (2021) as reference timbres and used COSYVOICE-300M<sup>3</sup> for zero-shot TTS synthesis. (5) Accent. We used the industrial-grade TTS tool (edge-TTS<sup>4</sup>), which offers over 318 timbre references spanning various regions, languages, and genders to achieve precise accent generation.",
469
+ "bbox": [
470
+ 169,
471
+ 578,
472
+ 826,
473
+ 789
474
+ ],
475
+ "page_idx": 4
476
+ },
477
+ {
478
+ "type": "text",
479
+ "text": "Stage3: Automatic Verification for Spoken Dialogue. To ensure the quality of the synthesized spoken dialogue data, we first employed a pre-trained model to automatically filter out unqualified samples, removing those with generation errors and inconsistent timbre. Specifically, we used the Whisper model (Radford et al., 2023) to filter out all sentences with a word error rate (WER) greater",
480
+ "bbox": [
481
+ 169,
482
+ 801,
483
+ 826,
484
+ 861
485
+ ],
486
+ "page_idx": 4
487
+ },
488
+ {
489
+ "type": "list",
490
+ "sub_type": "ref_text",
491
+ "list_items": [
492
+ "<sup>1</sup>https://huggingface.co/FunAudioLLM/CosyVoice-300M-Instruct",
493
+ "2https://huggingface.co/FunAudioLLM/CosyVoice-300M-SFT",
494
+ "<sup>3</sup>https://huggingface.co/model-scope/CosyVoice-300M",
495
+ "4https://github.com/rany2/edge-tts"
496
+ ],
497
+ "bbox": [
498
+ 189,
499
+ 868,
500
+ 712,
501
+ 922
502
+ ],
503
+ "page_idx": 4
504
+ },
505
+ {
506
+ "type": "header",
507
+ "text": "Published as a conference paper at ICLR 2025",
508
+ "bbox": [
509
+ 171,
510
+ 32,
511
+ 478,
512
+ 47
513
+ ],
514
+ "page_idx": 4
515
+ },
516
+ {
517
+ "type": "page_number",
518
+ "text": "5",
519
+ "bbox": [
520
+ 493,
521
+ 948,
522
+ 504,
523
+ 959
524
+ ],
525
+ "page_idx": 4
526
+ },
527
+ {
528
+ "type": "image",
529
+ "img_path": "images/f4ec7dd1f239e2872eeea2645c1a280f948e140ce9424a0b9bf46134b0476e50.jpg",
530
+ "image_caption": [],
531
+ "image_footnote": [],
532
+ "bbox": [
533
+ 174,
534
+ 99,
535
+ 566,
536
+ 190
537
+ ],
538
+ "page_idx": 5
539
+ },
540
+ {
541
+ "type": "image",
542
+ "img_path": "images/f98b6134425a0ac27628d18d72cf4c9f9eca703da6aa040eff614c722093cb4a.jpg",
543
+ "image_caption": [
544
+ "(a) Word Cloud of VoxDialogue."
545
+ ],
546
+ "image_footnote": [],
547
+ "bbox": [
548
+ 174,
549
+ 208,
550
+ 565,
551
+ 297
552
+ ],
553
+ "page_idx": 5
554
+ },
555
+ {
556
+ "type": "image",
557
+ "img_path": "images/daebf828bd42bf148cddeba4b94dd2ee83b59584d6e59105a72cb2788e430651.jpg",
558
+ "image_caption": [
559
+ "(b) The Duration Distribution of Turns.",
560
+ "(c) The Duration Distribution of Dialogues.",
561
+ "Figure 1: Visualization of static analysis of VoxDialogue."
562
+ ],
563
+ "image_footnote": [],
564
+ "bbox": [
565
+ 176,
566
+ 315,
567
+ 563,
568
+ 404
569
+ ],
570
+ "page_idx": 5
571
+ },
572
+ {
573
+ "type": "image",
574
+ "img_path": "images/b3bcde75b6dd926c2ce7c2b42753b581a249006b3763be4c10a1456388ff3738.jpg",
575
+ "image_caption": [],
576
+ "image_footnote": [],
577
+ "bbox": [
578
+ 594,
579
+ 99,
580
+ 794,
581
+ 232
582
+ ],
583
+ "page_idx": 5
584
+ },
585
+ {
586
+ "type": "image",
587
+ "img_path": "images/b84f7bed2f3e790a852919f91b8d65116305019317c247284b9cf3dc49878acf.jpg",
588
+ "image_caption": [
589
+ "(d) Distribution of Each Attribute.",
590
+ "(e) Distribution of multi-turn dialogue."
591
+ ],
592
+ "image_footnote": [],
593
+ "bbox": [
594
+ 596,
595
+ 253,
596
+ 790,
597
+ 388
598
+ ],
599
+ "page_idx": 5
600
+ },
601
+ {
602
+ "type": "text",
603
+ "text": "than $5\\%$ , and applied speaker-diarization-3.1 (Plaquet & Bredin, 2023; Bredin, 2023) to eliminate samples with timbre inconsistencies in speeches of the same speaker throughout dialogue sequence.",
604
+ "bbox": [
605
+ 169,
606
+ 474,
607
+ 823,
608
+ 503
609
+ ],
610
+ "page_idx": 5
611
+ },
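A minimal sketch of this verification step is shown below, using the open-source `openai-whisper` and `jiwer` packages for transcription and WER, and `pyannote.audio` for diarization. The 5% threshold follows the paper; the checkpoint choice and helper names are our assumptions.

```python
# Hedged sketch of Stage 3: drop turns whose transcript drifts from the script
# by more than 5% WER, then check timbre consistency with speaker diarization.
import whisper                       # pip install openai-whisper
from jiwer import wer                # pip install jiwer
from pyannote.audio import Pipeline  # pip install pyannote.audio (needs an HF token)

asr = whisper.load_model("large-v2")  # checkpoint choice is an assumption
diar = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")

def transcript_ok(audio_path: str, script: str, threshold: float = 0.05) -> bool:
    hyp = asr.transcribe(audio_path)["text"]
    return wer(script.lower().strip(), hyp.lower().strip()) <= threshold

def single_speaker(audio_path: str) -> bool:
    # a turn voiced by one speaker should yield exactly one diarization label
    annotation = diar(audio_path)
    return len(annotation.labels()) == 1
```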
612
+ {
613
+ "type": "text",
614
+ "text": "Stage4: Post-processing for Specific Acoustic Attributes. For attributes such as volume, fidelity, audio events, and music, we performed post-processing to ensure that the audio aligns with the required expectations. For fidelity, according to the Nyquist-Shannon sampling theorem, the sampling rate must be at least twice the highest frequency of the signal to ensure lossless reconstruction. To capture frequencies up to $4\\mathrm{kHz}$ , the minimum sampling rate should be $8\\mathrm{kHz}$ . Therefore, we downsampled the speech to $4\\mathrm{kHz}$ (to simulate the loss of speech signal and represent 'poor' audio quality, resulting in the loss of some speech information) and then resampled it back to $16\\mathrm{kHz}$ to simulate poor audio fidelity. For volume, dialogue turns labeled as 'loud' were amplified to simulate by increasing the power 8-fold. For dialogue turns labeled as 'low', the audio power was reduced to $50\\%$ of its original level to simulate poor microphone reception. For audio events, a large language model is used to classify events as either temporary or continuous. Temporary audio events, such as a door slamming or a phone ringing, are brief sounds that occur momentarily and are spliced before the first voice segment. In contrast, continuous audio events, like background chatter or street noise, are prolonged and are looped as background sound throughout the conversation. For music, we randomly spliced it before the first speech segment or set it to play in a loop as background sound.",
615
+ "bbox": [
616
+ 169,
617
+ 518,
618
+ 826,
619
+ 727
620
+ ],
621
+ "page_idx": 5
622
+ },
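These fidelity and volume manipulations translate directly into a few lines of signal processing. The sketch below uses librosa/soundfile and follows the stated parameters (16 kHz → 4 kHz → 16 kHz resampling, 8× power for 'loud', 50% power for 'low'); since power scales with the square of amplitude, the waveform gains are the square roots of the power ratios. Function names, the clipping step, and the background gain are our assumptions.

```python
# Hedged sketch of Stage 4 post-processing (fidelity, volume, background mixing).
import librosa
import numpy as np
import soundfile as sf

def degrade_fidelity(path_in: str, path_out: str) -> None:
    y, _ = librosa.load(path_in, sr=16000)
    # a 4 kHz sampling rate keeps only content below 2 kHz (its Nyquist frequency)
    y_lo = librosa.resample(y, orig_sr=16000, target_sr=4000)
    y_back = librosa.resample(y_lo, orig_sr=4000, target_sr=16000)
    sf.write(path_out, y_back, 16000)

def scale_power(y: np.ndarray, power_ratio: float) -> np.ndarray:
    # power ~ amplitude^2, so an 8x power boost is a sqrt(8)x waveform gain
    return np.clip(y * np.sqrt(power_ratio), -1.0, 1.0)

def loop_background(speech: np.ndarray, bg: np.ndarray, gain: float = 0.3) -> np.ndarray:
    # continuous events (chatter, street noise, music) are tiled under the whole turn
    reps = int(np.ceil(len(speech) / len(bg)))
    return np.clip(speech + gain * np.tile(bg, reps)[: len(speech)], -1.0, 1.0)

# 'loud' turns: scale_power(y, 8.0); 'low' turns: scale_power(y, 0.5)
```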
623
+ {
624
+ "type": "text",
625
+ "text": "Stage5: Human Verification. While large language models (LLMs) are effective at following instructions and generating coherent conversation samples, they are primarily trained on text data and lack exposure to human spoken conversations. As a result, the automatically generated data may exhibit unnatural characteristics. To ensure the naturalness and logical consistency of the spoken conversation sample pairs with the audio features, we employ human annotators for additional quality checks.",
626
+ "bbox": [
627
+ 169,
628
+ 741,
629
+ 823,
630
+ 825
631
+ ],
632
+ "page_idx": 5
633
+ },
634
+ {
635
+ "type": "text",
636
+ "text": "3.3 DATASET STATISTICS",
637
+ "text_level": 1,
638
+ "bbox": [
639
+ 171,
640
+ 842,
641
+ 364,
642
+ 856
643
+ ],
644
+ "page_idx": 5
645
+ },
646
+ {
647
+ "type": "text",
648
+ "text": "Distribution of Attribute Categories. As shown in Figure 1 (d), the distribution of attribute categories in VoxDialogue is balanced, allowing for a comprehensive evaluation of spoken dialogue systems' understanding and dialogue capabilities across various acoustic attributes. In Figure 1 (a), we also present a word cloud of VoxDialogue, where it is evident that the dataset primarily con",
649
+ "bbox": [
650
+ 169,
651
+ 867,
652
+ 823,
653
+ 925
654
+ ],
655
+ "page_idx": 5
656
+ },
657
+ {
658
+ "type": "header",
659
+ "text": "Published as a conference paper at ICLR 2025",
660
+ "bbox": [
661
+ 171,
662
+ 32,
663
+ 478,
664
+ 47
665
+ ],
666
+ "page_idx": 5
667
+ },
668
+ {
669
+ "type": "page_number",
670
+ "text": "6",
671
+ "bbox": [
672
+ 493,
673
+ 948,
674
+ 504,
675
+ 959
676
+ ],
677
+ "page_idx": 5
678
+ },
679
+ {
680
+ "type": "table",
681
+ "img_path": "images/5682c8d103ffef6afb85570ebe81360519089104268733153d5adc10f1c9dc17.jpg",
682
+ "table_caption": [
683
+ "Table 3: Detailed statistics of the corresponding subsets of each attribute in VoxDialogue. Gray fonts indicate that samples of this attribute are included in other subsets. IN (India), CA (Canada), ZA (South Africa), GB (United Kingdom), SG (Singapore), US (United States), and AU (Australia). Turns represents the total number of turns in each subset, Dialog. indicates the number of dialogues in each subset, Avg denotes the average number of turns per dialogue in each subset, and Dur. refers to the total duration (in hours) of all dialogues in each subset."
684
+ ],
685
+ "table_footnote": [],
686
+ "table_body": "<table><tr><td>Attributes</td><td>Categories</td><td>Turns</td><td>Dialog.</td><td>Avg</td><td>Dur.</td></tr><tr><td colspan=\"6\">I. Speaker Information</td></tr><tr><td>Gender</td><td>Male, Female</td><td>2040</td><td>340</td><td>6.0</td><td>3.17</td></tr><tr><td>Age</td><td>Youth (15-30), Middle-Aged (30-60), Elderly (60+)</td><td>3096</td><td>447</td><td>6.9</td><td>6.05</td></tr><tr><td>Accent</td><td>IN, CA, ZA, GB, SG, US, AU</td><td>1440</td><td>240</td><td>6.0</td><td>2.20</td></tr><tr><td>Language</td><td>Chinese, English</td><td>2892</td><td>482</td><td>6.0</td><td>3.51</td></tr><tr><td colspan=\"6\">II. Paralinguistic Information</td></tr><tr><td>Emotion</td><td>Neutral, Happy, Sad, Angry, Surprised, Fearful, Diagusted</td><td>1980</td><td>330</td><td>6.0</td><td>2.41</td></tr><tr><td>Volume</td><td>Loud Volume, Low Volume, Normal Volume</td><td>1824</td><td>304</td><td>6.0</td><td>2.08</td></tr><tr><td>Speed</td><td>High Speed, Low Speed, Normal Speed</td><td>2184</td><td>364</td><td>6.0</td><td>2.93</td></tr><tr><td>Fidelity</td><td>Low Fidelity, Normal Fidelity</td><td>2196</td><td>366</td><td>6.0</td><td>3.36</td></tr><tr><td>Stress</td><td>Stress, No Stress</td><td>2354</td><td>392</td><td>6.0</td><td>2.51</td></tr><tr><td>NVE</td><td>Laughter, No Laughter</td><td>2046</td><td>341</td><td>6.0</td><td>3.68</td></tr><tr><td colspan=\"6\">III. Environmental Information</td></tr><tr><td>Audio</td><td>The caption of different audio. (e.g., The wind is blowing and rustling occurs.)</td><td>5000</td><td>500</td><td>10.0</td><td>5.25</td></tr><tr><td>Music</td><td>The aspect list of different music pieces. (e.g., [steeldrum, higher register, amateur recording])</td><td>3734</td><td>420</td><td>8.9</td><td>5.42</td></tr><tr><td>Overall</td><td></td><td>30.7K</td><td>4.5K</td><td>6.8</td><td>42.56</td></tr></table>",
687
+ "bbox": [
688
+ 173,
689
+ 202,
690
+ 823,
691
+ 503
692
+ ],
693
+ "page_idx": 6
694
+ },
695
+ {
696
+ "type": "text",
697
+ "text": "sists of daily dialogue, featuring a large number of natural spoken words such as “yeah,” which are representative of daily spoken interactions. This makes it suitable for assessing the performance of spoken dialogue systems in real-world dialogue scenarios. Additionally, the dataset contains numerous acoustically relevant keywords, such as “heard,” “loud,” and “sound,” further supporting the evaluation of acoustic-related aspects of dialogue understanding.",
698
+ "bbox": [
699
+ 169,
700
+ 534,
701
+ 826,
702
+ 606
703
+ ],
704
+ "page_idx": 6
705
+ },
706
+ {
707
+ "type": "text",
708
+ "text": "Distribution of Dialogue Turns and Duration. All dialogues in our dataset are multi-turn dialogues. In Figure 1 (e), we show the distribution of dialogue turns, with the majority consisting of 6 turns and a maximum of 10 turns. This allows for a comprehensive evaluation of spoken dialogue systems' ability to understand contexts of varying lengths. In addition, Figures 1 (b) and 1 (c) illustrate the distribution of each turn and the overall dialogue length, respectively, showing that most sentences are approximately 4 seconds long. This implies that the system must understand the context and reason effectively before generating a response.",
709
+ "bbox": [
710
+ 169,
711
+ 619,
712
+ 823,
713
+ 719
714
+ ],
715
+ "page_idx": 6
716
+ },
717
+ {
718
+ "type": "text",
719
+ "text": "Statistics for Subset of Each Attribute. We present the detailed statistics of each attribute in VoxDialogue in Table 3, covering 35 different categories across 12 attributes. The average number of turns per dialogue exceeds 6, with each attribute containing more than 300 dialogues, ensuring comprehensive reflection of dialogue capabilities.",
720
+ "bbox": [
721
+ 169,
722
+ 732,
723
+ 823,
724
+ 792
725
+ ],
726
+ "page_idx": 6
727
+ },
728
+ {
729
+ "type": "text",
730
+ "text": "4 BENCHMARK FOR SPOKEN DIALOGUE SYSTEM",
731
+ "text_level": 1,
732
+ "bbox": [
733
+ 171,
734
+ 810,
735
+ 602,
736
+ 825
737
+ ],
738
+ "page_idx": 6
739
+ },
740
+ {
741
+ "type": "text",
742
+ "text": "4.1 TASK DEFINITION",
743
+ "text_level": 1,
744
+ "bbox": [
745
+ 171,
746
+ 842,
747
+ 341,
748
+ 856
749
+ ],
750
+ "page_idx": 6
751
+ },
752
+ {
753
+ "type": "text",
754
+ "text": "The task of a spoken dialogue system is to generate appropriate responses based on the contextual information from the sequence of human dialogue (e.g., the user's utterance sequence) and the preceding assistant response sequence, where the total number of dialogue turns is denoted by $t$ . The goal of the spoken dialogue system is to generate the most suitable response based on the previous",
755
+ "bbox": [
756
+ 169,
757
+ 867,
758
+ 823,
759
+ 926
760
+ ],
761
+ "page_idx": 6
762
+ },
763
+ {
764
+ "type": "header",
765
+ "text": "Published as a conference paper at ICLR 2025",
766
+ "bbox": [
767
+ 171,
768
+ 32,
769
+ 478,
770
+ 47
771
+ ],
772
+ "page_idx": 6
773
+ },
774
+ {
775
+ "type": "page_number",
776
+ "text": "7",
777
+ "bbox": [
778
+ 493,
779
+ 948,
780
+ 504,
781
+ 959
782
+ ],
783
+ "page_idx": 6
784
+ },
785
+ {
786
+ "type": "image",
787
+ "img_path": "images/8e2bd697647489332f0230fe03e22b0cfcd58781dc01fdb589467dc98833bb66.jpg",
788
+ "image_caption": [
789
+ "(a) Comparison of BLEU Across Methods and Attributes."
790
+ ],
791
+ "image_footnote": [],
792
+ "bbox": [
793
+ 174,
794
+ 101,
795
+ 823,
796
+ 200
797
+ ],
798
+ "page_idx": 7
799
+ },
800
+ {
801
+ "type": "image",
802
+ "img_path": "images/fd8250fef90ac141cdfb3cd4cd0e72be3b98344f686bb88a36d6711ed70f9ee0.jpg",
803
+ "image_caption": [
804
+ "(b) Comparison of ROUGE-L Across Methods and Attributes."
805
+ ],
806
+ "image_footnote": [],
807
+ "bbox": [
808
+ 174,
809
+ 234,
810
+ 823,
811
+ 335
812
+ ],
813
+ "page_idx": 7
814
+ },
815
+ {
816
+ "type": "image",
817
+ "img_path": "images/335ab2868fbc1bc99fc519b8712e8bd06531281e417d33719946341b8772de09.jpg",
818
+ "image_caption": [
819
+ "(c) Comparison of METEOR Across Methods and Attributes."
820
+ ],
821
+ "image_footnote": [],
822
+ "bbox": [
823
+ 174,
824
+ 369,
825
+ 823,
826
+ 469
827
+ ],
828
+ "page_idx": 7
829
+ },
830
+ {
831
+ "type": "image",
832
+ "img_path": "images/23a282e6e73083d9529e8e73ba11245757698dd6a316d80fb39b745394dcdf7b.jpg",
833
+ "image_caption": [
834
+ "(d) Comparison of BERTScore Across Methods and Attributes.",
835
+ "Figure 2: The comparison of spoken dialogue performance across 12 different attribute-specific test sets on the VoxDialogue dataset."
836
+ ],
837
+ "image_footnote": [],
838
+ "bbox": [
839
+ 174,
840
+ 503,
841
+ 823,
842
+ 603
843
+ ],
844
+ "page_idx": 7
845
+ },
846
+ {
847
+ "type": "text",
848
+ "text": "$t$ utterances and the $t - 1$ historical replies. In our work, we evaluate the performance of the spoken dialogue system by focusing solely on the final utterance of each dialogue.",
849
+ "bbox": [
850
+ 169,
851
+ 691,
852
+ 823,
853
+ 722
854
+ ],
855
+ "page_idx": 7
856
+ },
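Under assumed data structures, this protocol can be summarized as: the system receives all $t$ user utterances plus the first $t-1$ reference replies, and only its reply to turn $t$ is scored. A minimal sketch follows; the `Turn` container and the `system` callable signature are our assumptions, not the paper's API.

```python
# Hedged sketch of the evaluation protocol for one dialogue.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Turn:
    user_audio: str   # path to the user's speech for this turn
    reply: str        # reference assistant reply (text)

def evaluate_dialogue(
    turns: List[Turn],
    system: Callable[[List[str], List[str]], str],
) -> Tuple[str, str]:
    user_audios = [t.user_audio for t in turns]        # all t user utterances
    history_replies = [t.reply for t in turns[:-1]]    # the t-1 earlier replies
    prediction = system(user_audios, history_replies)  # model answers turn t
    return prediction, turns[-1].reply                 # scored against the final reference
```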
857
+ {
858
+ "type": "text",
859
+ "text": "4.2 EVALUATION METRICS",
860
+ "text_level": 1,
861
+ "bbox": [
862
+ 171,
863
+ 737,
864
+ 377,
865
+ 750
866
+ ],
867
+ "page_idx": 7
868
+ },
869
+ {
870
+ "type": "text",
871
+ "text": "To assess the model's performance, we conducted separate tests on a subset of Voxdialogue. Drawing on previous research (Ao et al., 2024), we utilized both quantitative and qualitative metrics for a comprehensive evaluation. The quantitative evaluation focused on two key aspects: content and style. For content evaluation, we employed widely recognized text generation metrics, including vocabulary-level measures such as BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), and ME-TEOR (Banerjee & Lavie, 2005), alongside semantic-level metrics like BERTScore (Zhang et al., 2019). For style evaluation, we calculated the weighted F1 score of speech sentiment.",
872
+ "bbox": [
873
+ 169,
874
+ 763,
875
+ 826,
876
+ 862
877
+ ],
878
+ "page_idx": 7
879
+ },
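A minimal scoring sketch using the Hugging Face `evaluate` toolkit is shown below. This is one of several equivalent implementations; the paper does not state which packages it used, so settings such as BERTScore's `lang="en"` are assumptions, and the example sentences are invented for illustration.

```python
# Hedged sketch: content metrics for one predicted reply vs. its reference.
import evaluate  # pip install evaluate bert-score rouge-score nltk

preds = ["Sure, I will speak up so you can hear me better."]
refs = ["Of course, let me raise my voice a little."]

bleu = evaluate.load("bleu").compute(predictions=preds, references=[[r] for r in refs])
rouge = evaluate.load("rouge").compute(predictions=preds, references=refs)
meteor = evaluate.load("meteor").compute(predictions=preds, references=refs)
berts = evaluate.load("bertscore").compute(predictions=preds, references=refs, lang="en")

print(bleu["bleu"], rouge["rougeL"], meteor["meteor"], berts["f1"][0])
```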
880
+ {
881
+ "type": "text",
882
+ "text": "In addition to these quantitative assessments, we conducted a qualitative analysis using GPT-based metric (Yang et al., 2024a). The meaning of each score is as follows: 1: Contextually relevant but lacks attribute information. 2: Partially relevant to the context but feels unnatural, with no attribute information. 3: Partially relevant to the context, with mention of the attribute. 4: Contextually",
883
+ "bbox": [
884
+ 169,
885
+ 868,
886
+ 825,
887
+ 925
888
+ ],
889
+ "page_idx": 7
890
+ },
891
+ {
892
+ "type": "header",
893
+ "text": "Published as a conference paper at ICLR 2025",
894
+ "bbox": [
895
+ 173,
896
+ 32,
897
+ 478,
898
+ 47
899
+ ],
900
+ "page_idx": 7
901
+ },
902
+ {
903
+ "type": "page_number",
904
+ "text": "8",
905
+ "bbox": [
906
+ 493,
907
+ 948,
908
+ 504,
909
+ 959
910
+ ],
911
+ "page_idx": 7
912
+ },
913
+ {
914
+ "type": "table",
915
+ "img_path": "images/20bb1ff9650d98d2421d203d90f9bea18ee1d65e4b86f7460452aa8eeab218d8.jpg",
916
+ "table_caption": [
917
+ "Table 4: GPT-based Metric Comparison of Different Spoken Dialogue Models on VoxDialogue."
918
+ ],
919
+ "table_footnote": [],
920
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">Speaker Info</td><td colspan=\"6\">Paralinguistic Info</td><td colspan=\"2\">Env Info</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td colspan=\"13\">ASR-Based Spoken Dialogue System</td></tr><tr><td>FunAudioLLM (SpeechTeam, 2024)</td><td>4.32</td><td>4.39</td><td>3.57</td><td>4.61</td><td>4.09</td><td>1.82</td><td>1.92</td><td>1.79</td><td>3.13</td><td>2.87</td><td>3.47</td><td>3.59</td></tr><tr><td colspan=\"13\">Direct Spoken Dialogue System</td></tr><tr><td>Audio-Flamingo (Kong et al., 2024)</td><td>1.00</td><td>1.00</td><td>1.04</td><td>1.72</td><td>1.00</td><td>1.20</td><td>1.14</td><td>1.26</td><td>1.34</td><td>3.06</td><td>1.37</td><td>1.11</td></tr><tr><td>SALMONN (Tang et al., 2023)</td><td>1.99</td><td>1.64</td><td>1.78</td><td>3.50</td><td>1.84</td><td>2.88</td><td>2.27</td><td>2.29</td><td>3.86</td><td>2.59</td><td>2.15</td><td>2.23</td></tr><tr><td>Qwen-Audio (Chu et al., 2023)</td><td>1.36</td><td>1.04</td><td>1.28</td><td>1.04</td><td>1.06</td><td>1.48</td><td>1.08</td><td>1.32</td><td>2.49</td><td>2.65</td><td>1.42</td><td>1.18</td></tr><tr><td>Qwen2-Audio (Chu et al., 2024)</td><td>3.46</td><td>4.18</td><td>2.71</td><td>4.43</td><td>3.73</td><td>3.06</td><td>3.29</td><td>2.98</td><td>3.93</td><td>3.46</td><td>3.81</td><td>3.98</td></tr></table>",
921
+ "bbox": [
922
+ 183,
923
+ 132,
924
+ 815,
925
+ 289
926
+ ],
927
+ "page_idx": 8
928
+ },
929
+ {
930
+ "type": "text",
931
+ "text": "relevant and natural, mentioning the attribute, but could be improved. 5: Contextually relevant, smooth, natural, and accurately addresses the attribute. We have included all the evaluated prompt templates in supplementary materials. Please refer to the supplementary materials for more details.",
932
+ "bbox": [
933
+ 169,
934
+ 303,
935
+ 823,
936
+ 347
937
+ ],
938
+ "page_idx": 8
939
+ },
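For illustration, a rating prompt in the spirit of this rubric might look like the sketch below. The paper's actual templates are in its supplementary material, so this wording is an assumption.

```python
# Hedged sketch of a GPT-based 1-5 rating prompt; wording is illustrative only.
RATING_PROMPT = """You are judging a spoken dialogue system's reply.

Dialogue history (transcribed):
{history}

Target acoustic attribute: {attribute}
System reply: {reply}

Score the reply from 1 to 5:
1 = contextually relevant but ignores the attribute
2 = only partially relevant and unnatural, ignores the attribute
3 = partially relevant, mentions the attribute
4 = relevant and natural, mentions the attribute, but could be improved
5 = relevant, smooth, natural, and accurately addresses the attribute

Answer with a single integer."""

def build_prompt(history: str, attribute: str, reply: str) -> str:
    return RATING_PROMPT.format(history=history, attribute=attribute, reply=reply)
```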
940
+ {
941
+ "type": "text",
942
+ "text": "4.3 SPOKEN DIALOGUE SYSTEM",
943
+ "text_level": 1,
944
+ "bbox": [
945
+ 171,
946
+ 364,
947
+ 415,
948
+ 378
949
+ ],
950
+ "page_idx": 8
951
+ },
952
+ {
953
+ "type": "text",
954
+ "text": "In order to build a comprehensive benchmark, we evaluated two main types of spoken dialogue system approaches: (1) ASR-based dialogue systems (e.g., FunAudioLLM (Fang et al., 2024)) and (2) direct spoken dialogue systems $^5$ (e.g., Audio-Flamingo (Kong et al., 2024), SALMONN (Tang et al., 2023), Qwen-Audio Instruct (Chu et al., 2023), and Qwen2-Audio Instruct (Chu et al., 2024)). Figure 2 presents a comparative analysis using four metrics across various attributes on the VoxDialogue dataset. Based on the experimental results, we gained the following key insights:",
955
+ "bbox": [
956
+ 169,
957
+ 391,
958
+ 823,
959
+ 476
960
+ ],
961
+ "page_idx": 8
962
+ },
963
+ {
964
+ "type": "text",
965
+ "text": "ASR-based systems excel in context-sensitive tasks. In attributes that can be inferred through context understanding, ASR-based systems (such as FunAudioLLM) show significant advantages. ASR systems first transcribe speech into text and then process it, allowing them to more effectively capture and analyze the context of a conversation. For example, in attributes like Emotion and Speaker Information(Age, Gender, Accent, Language), FunAudioLLM consistently outperforms direct spoken dialogue systems. The results from BLEU, ROUGE-L, METEOR, and BERTScore metrics indicate that FunAudioLLM achieves higher scores, such as in emotion (3.22 BLEU, 14.93 ROUGE-L, 19.31 METEOR, 86.92 BERTScore). This proves that most current direct spoken dialogue systems lack adequate context understanding capabilities and are far weaker than text-based large language models. Additionally, although ASR-based models may have limitations in understanding acoustic information, comparing them provides a valuable performance reference, representing the upper bound performance without the integration of acoustic information.",
966
+ "bbox": [
967
+ 169,
968
+ 481,
969
+ 826,
970
+ 650
971
+ ],
972
+ "page_idx": 8
973
+ },
974
+ {
975
+ "type": "text",
976
+ "text": "Advantages of direct spoken dialogue systems in acoustic attribute processing. Although ASR-based systems can leverage the strong context understanding capabilities of large language models, they struggle with attributes that heavily rely on sound understanding (such as volume, fidelity, speed, and other paralinguistic information). ASR-based methods face challenges when addressing dialogue tasks related to these attributes. In contrast, direct systems like Qwen2-Audio excel in tasks involving these acoustic properties. The results show that Qwen2-Audio outperforms other systems in these categories. For instance, Qwen2-Audio achieved the highest scores for volume (4.56 BLEU, 16.13 ROUGE-L, 22.82 METEOR, and 87.99 BERTScore), demonstrating its ability to handle loud and soft speech variations more effectively. Similarly, fidelity is another strong point for direct dialogue systems. Qwen2-Audio's excellent performance in handling varying fidelity levels (3.38 BLEU, 14.36 ROUGE-L, 12.78 METEOR, 85.66 BERTScore) confirms that spoken dialogue tasks, which heavily rely on acoustic information beyond words.",
977
+ "bbox": [
978
+ 169,
979
+ 655,
980
+ 826,
981
+ 824
982
+ ],
983
+ "page_idx": 8
984
+ },
985
+ {
986
+ "type": "text",
987
+ "text": "4.4 QUALITATIVE COMPARISON",
988
+ "text_level": 1,
989
+ "bbox": [
990
+ 171,
991
+ 840,
992
+ 410,
993
+ 857
994
+ ],
995
+ "page_idx": 8
996
+ },
997
+ {
998
+ "type": "text",
999
+ "text": "Inspired by Yang et al. (2024a), we also attempted to use GPT-4 (OpenAI, 2024b) for evaluation, focusing on whether the responses exhibit the specific attribute characteristics and whether they",
1000
+ "bbox": [
1001
+ 169,
1002
+ 867,
1003
+ 823,
1004
+ 898
1005
+ ],
1006
+ "page_idx": 8
1007
+ },
1008
+ {
1009
+ "type": "header",
1010
+ "text": "Published as a conference paper at ICLR 2025",
1011
+ "bbox": [
1012
+ 171,
1013
+ 32,
1014
+ 478,
1015
+ 47
1016
+ ],
1017
+ "page_idx": 8
1018
+ },
1019
+ {
1020
+ "type": "page_footnote",
1021
+ "text": "5 All models used in the evaluation are -chat version.",
1022
+ "bbox": [
1023
+ 189,
1024
+ 909,
1025
+ 504,
1026
+ 922
1027
+ ],
1028
+ "page_idx": 8
1029
+ },
1030
+ {
1031
+ "type": "page_number",
1032
+ "text": "9",
1033
+ "bbox": [
1034
+ 493,
1035
+ 948,
1036
+ 504,
1037
+ 959
1038
+ ],
1039
+ "page_idx": 8
1040
+ },
1041
+ {
1042
+ "type": "text",
1043
+ "text": "provide reasonable replies to the previous context. As shown in Table 4, we present the qualitative testing results of different methods across 12 attributes. Specifically, a score of 3 represents mention of attribute information, 4 represents a reasonable and natural response.",
1044
+ "bbox": [
1045
+ 169,
1046
+ 103,
1047
+ 823,
1048
+ 147
1049
+ ],
1050
+ "page_idx": 9
1051
+ },
1052
+ {
1053
+ "type": "text",
1054
+ "text": "We observed that the conclusions from the qualitative tests largely align with those from the quantitative evaluations. For context-driven attributes (such as speaker information and emotion), ASR-based dialogue models continue to demonstrate the best performance. However, for attributes that are highly dependent on acoustic information (such as speed, fidelity, audio, and music), direct spoken dialogue models like Qwen2-Audio significantly outperform FunAudioLLM, underscoring the importance of developing direct spoken dialogue models.",
1055
+ "bbox": [
1056
+ 169,
1057
+ 152,
1058
+ 826,
1059
+ 238
1060
+ ],
1061
+ "page_idx": 9
1062
+ },
1063
+ {
1064
+ "type": "text",
1065
+ "text": "Additionally, we found that Qwen-Audio often responds with descriptive sentences related to the query, which severely affects its performance. The SALMONN model frequently repeats parts of the query, leading to higher quantitative scores in some attributes (e.g., a BLEUScore of 87.53 for Stress, 0.53 higher than Qwen2-Audio), but its qualitative performance is inferior to Qwen2-Audio (with a GPT-4-based metric score 0.97 lower). This indicates that most current large audio-language models are focused on QA-style interactions, and are not yet well-suited for dialogue-style conversations.",
1066
+ "bbox": [
1067
+ 169,
1068
+ 243,
1069
+ 826,
1070
+ 328
1071
+ ],
1072
+ "page_idx": 9
1073
+ },
1074
+ {
1075
+ "type": "text",
1076
+ "text": "5 ETHICAL DISCUSSION",
1077
+ "text_level": 1,
1078
+ "bbox": [
1079
+ 171,
1080
+ 351,
1081
+ 392,
1082
+ 367
1083
+ ],
1084
+ "page_idx": 9
1085
+ },
1086
+ {
1087
+ "type": "text",
1088
+ "text": "Our dataset incorporates certain attributes that may introduce bias (e.g., gender) as dimensions to evaluate the model's ability to process diverse acoustic information. However, this introduces potential risks of unfairness, such as biased or stereotypical responses. In Appendix C, we outline the fairness challenges faced by spoken dialogue models.",
1089
+ "bbox": [
1090
+ 169,
1091
+ 385,
1092
+ 823,
1093
+ 441
1094
+ ],
1095
+ "page_idx": 9
1096
+ },
1097
+ {
1098
+ "type": "text",
1099
+ "text": "To promote the development of unbiased spoken dialogue systems, we conducted manual filtering of all potentially sensitive data to ensure that the dataset complies with Collins & Clément (2012) and excludes examples that could cause harm due to attribute-related biases. Looking ahead, we are committed to proactively identifying and addressing these challenges, contributing to the creation of fairer and more inclusive spoken dialogue systems. Furthermore, we pledge to continually update this work to advance the development of equitable conversational AI.",
1100
+ "bbox": [
1101
+ 169,
1102
+ 446,
1103
+ 826,
1104
+ 532
1105
+ ],
1106
+ "page_idx": 9
1107
+ },
1108
+ {
1109
+ "type": "text",
1110
+ "text": "6 CONCLUSION",
1111
+ "text_level": 1,
1112
+ "bbox": [
1113
+ 171,
1114
+ 556,
1115
+ 320,
1116
+ 571
1117
+ ],
1118
+ "page_idx": 9
1119
+ },
1120
+ {
1121
+ "type": "text",
1122
+ "text": "In this work, we introduced VoxDialogue, a comprehensive benchmark designed to evaluate spoken dialogue systems' ability to understand information beyond words. By identifying 12 critical attributes tied to acoustic cues such as speech rate, volume, emphasis, and background sounds, we constructed a challenging test set of 4.5K multi-turn dialogue samples. Our experiments demonstrated that while ASR-based systems excel at context understanding and textual interpretation, they fail to capture important acoustic signals that are essential for contextually appropriate responses. In contrast, direct spoken dialogue systems outperform ASR-based models in processing acoustic properties, but their limited ability to understand complex dialogue contexts remains a significant shortcoming. The findings highlight the importance of acoustic information in enhancing the performance of spoken dialogue systems and reveal the current limitations in both ASR-based and direct spoken dialogue models.",
1123
+ "bbox": [
1124
+ 169,
1125
+ 589,
1126
+ 826,
1127
+ 743
1128
+ ],
1129
+ "page_idx": 9
1130
+ },
1131
+ {
1132
+ "type": "text",
1133
+ "text": "REPRODUCIBILITY STATEMENT",
1134
+ "text_level": 1,
1135
+ "bbox": [
1136
+ 171,
1137
+ 767,
1138
+ 439,
1139
+ 782
1140
+ ],
1141
+ "page_idx": 9
1142
+ },
1143
+ {
1144
+ "type": "text",
1145
+ "text": "All of our data, code, and model weights will be open-sourced.",
1146
+ "bbox": [
1147
+ 171,
1148
+ 800,
1149
+ 586,
1150
+ 816
1151
+ ],
1152
+ "page_idx": 9
1153
+ },
1154
+ {
1155
+ "type": "list",
1156
+ "sub_type": "text",
1157
+ "list_items": [
1158
+ "- Section 3 provides detailed instructions on the construction of VoxDialogue, including a comprehensive list of all relevant open-source resources.",
1159
+ "- Section 4.1 outlines the detailed task definitions.",
1160
+ "- Section 4.2 elaborates on the evaluation metrics and specific details.",
1161
+ "- All of our prompt templates are included in the Supplementary Material."
1162
+ ],
1163
+ "bbox": [
1164
+ 215,
1165
+ 828,
1166
+ 823,
1167
+ 922
1168
+ ],
1169
+ "page_idx": 9
1170
+ },
1171
+ {
1172
+ "type": "header",
1173
+ "text": "Published as a conference paper at ICLR 2025",
1174
+ "bbox": [
1175
+ 171,
1176
+ 32,
1177
+ 478,
1178
+ 47
1179
+ ],
1180
+ "page_idx": 9
1181
+ },
1182
+ {
1183
+ "type": "page_number",
1184
+ "text": "10",
1185
+ "bbox": [
1186
+ 490,
1187
+ 946,
1188
+ 509,
1189
+ 960
1190
+ ],
1191
+ "page_idx": 9
1192
+ },
1193
+ {
1194
+ "type": "text",
1195
+ "text": "ACKNOWLEDGMENTS",
1196
+ "text_level": 1,
1197
+ "bbox": [
1198
+ 171,
1199
+ 102,
1200
+ 356,
1201
+ 118
1202
+ ],
1203
+ "page_idx": 10
1204
+ },
1205
+ {
1206
+ "type": "ref_text",
1207
+ "text": "This work was supported in part by National Natural Science Foundation of China under Grant No. 62222211 and No.624B2128.",
1208
+ "bbox": [
1209
+ 171,
1210
+ 132,
1211
+ 823,
1212
+ 161
1213
+ ],
1214
+ "page_idx": 10
1215
+ },
1216
+ {
1217
+ "type": "text",
1218
+ "text": "REFERENCES",
1219
+ "text_level": 1,
1220
+ "bbox": [
1221
+ 171,
1222
+ 181,
1223
+ 287,
1224
+ 196
1225
+ ],
1226
+ "page_idx": 10
1227
+ },
1228
+ {
1229
+ "type": "list",
1230
+ "sub_type": "ref_text",
1231
+ "list_items": [
1232
+ "Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, and Zhizheng Wu. Sd-eval: A benchmark dataset for spoken dialogue understanding beyond words. arXiv preprint arXiv:2406.13340, 2024.",
1233
+ "Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65-72, 2005.",
1234
+ "Hervé Bredin. pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe. In Proc. INTERSPEECH 2023, 2023.",
1235
+ "Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. Multiwoz-a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 5016-5026, 2018.",
1236
+ "Xize Cheng, Rongjie Huang, Linjun Li, Tao Jin, Zehan Wang, Aoxiong Yin, Minglei Li, Xinyu Duan, Zhou Zhao, et al. Transface: Unit-based audio-visual speech synthesizer for talking head translation. arXiv preprint arXiv:2312.15197, 2023a.",
1237
+ "Xize Cheng, Tao Jin, Rongjie Huang, Linjun Li, Wang Lin, Zehan Wang, Ye Wang, Huadai Liu, Aoxiong Yin, and Zhou Zhao. Mixspeech: Cross-modality self-learning with audio-visual stream mixup for visual speech translation and recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15735–15745, 2023b.",
1238
+ "Xize Cheng, Tao Jin, Linjun Li, Wang Lin, Xinyu Duan, and Zhou Zhao. Opensr: Open-modality speech recognition via maintaining multi-modality alignment. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6592-6607, 2023c.",
1239
+ "Xize Cheng, Dongjie Fu, Xiaoda Yang, Minghui Fang, Ruofan Hu, Jingyu Lu, Bai Jionghao, Zehan Wang, Shengpeng Ji, Rongjie Huang, et al. Omnichat: Enhancing spoken dialogue systems with scalable synthetic data for diverse scenarios. arXiv preprint arXiv:2501.01384, 2025.",
1240
+ "Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, and Jingren Zhou. Qwen-audio: Advancing universal audio understanding via unified large-scale audio-language models. arXiv preprint arXiv:2311.07919, 2023.",
1241
+ "Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, et al. Qwen2-audio technical report. arXiv preprint arXiv:2407.10759, 2024.",
1242
+ "Katherine A Collins and Richard Clément. Language and prejudice: Direct and moderated effects. Journal of Language and Social Psychology, 31(4):376-396, 2012.",
1243
+ "Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, et al. Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens. arXiv preprint arXiv:2407.05407, 2024.",
1244
+ "Solène Evain, Ha Nguyen, Hang Le, Marcely Zanon Boito, Salima Mdhaffar, Sina Alisamir, Ziyi Tong, Natalia Tomashenko, Marco Dinarelli, Titouan Parcollet, et al. Lebenchmark: A reproducible framework for assessing self-supervised representation learning from speech. In INTER-SPEECH 2021: Conference of the International Speech Communication Association, 2021.",
1245
+ "Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, and Yang Feng. Llama-omni: Seamless speech interaction with large language models. arXiv preprint arXiv:2409.06666, 2024."
1246
+ ],
1247
+ "bbox": [
1248
+ 171,
1249
+ 205,
1250
+ 825,
1251
+ 924
1252
+ ],
1253
+ "page_idx": 10
1254
+ },
1255
+ {
1256
+ "type": "header",
1257
+ "text": "Published as a conference paper at ICLR 2025",
1258
+ "bbox": [
1259
+ 171,
1260
+ 32,
1261
+ 478,
1262
+ 47
1263
+ ],
1264
+ "page_idx": 10
1265
+ },
1266
+ {
1267
+ "type": "page_number",
1268
+ "text": "11",
1269
+ "bbox": [
1270
+ 490,
1271
+ 948,
1272
+ 506,
1273
+ 959
1274
+ ],
1275
+ "page_idx": 10
1276
+ },
1277
+ {
1278
+ "type": "list",
1279
+ "sub_type": "ref_text",
1280
+ "list_items": [
1281
+ "Dongjie Fu, Xize Cheng, Xiaoda Yang, Wang Hanting, Zhou Zhao, and Tao Jin. Boosting speech recognition robustness to modality-distortion with contrast-augmented prompts. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 3838-3847, 2024.",
1282
+ "Khaled Hechmi, Trung Ngo Trong, Ville Hautamäki, and Tomi Kinnunen. Voxceleb enrichment for age and gender recognition. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 687-693. IEEE, 2021.",
1283
+ "Matthew B Hoy. Alexa, siri, cortana, and more: an introduction to voice assistants. Medical reference services quarterly, 37(1):81-88, 2018.",
1284
+ "Rongjie Huang, Huadai Liu, Xize Cheng, Yi Ren, Linjun Li, Zhenhui Ye, Jinzheng He, Lichao Zhang, Jinglin Liu, Xiang Yin, et al. Av-transpeech: Audio-visual robust speech-to-speech translation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8590–8604, 2023.",
1285
+ "Shengpeng Ji, Yifu Chen, Minghui Fang, Jialong Zuo, Jingyu Lu, Hanting Wang, Ziyue Jiang, Long Zhou, Shujie Liu, Xize Cheng, et al. Wavchat: A survey of spoken dialogue models. arXiv preprint arXiv:2411.13577, 2024a.",
1286
+ "Shengpeng Ji, Jialong Zuo, Wen Wang, Minghui Fang, Siqi Zheng, Qian Chen, Ziyue Jiang, Hai Huang, Zehan Wang, Xize Cheng, et al. Controlspeech: Towards simultaneous zero-shot speaker cloning and zero-shot language style control with decoupled codec. arXiv preprint arXiv:2406.01205, 2024b.",
1287
+ "Zhifeng Kong, Arushi Goel, Rohan Badlani, Wei Ping, Rafael Valle, and Bryan Catanzaro. Audio flamingo: A novel audio language model with few-shot learning and dialogue abilities. arXiv preprint arXiv:2402.01831, 2024.",
1288
+ "Keon Lee, Kyumin Park, and Daeyoung Kim. Dailytalk: Spoken dialogue dataset for conversational text-to-speech. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5. IEEE, 2023.",
1289
+ "Songju Lei, Xize Cheng, Mengjiao Lyu, Jianqiao Hu, Jintao Tan, Runlin Liu, Lingyu Xiong, Tao Jin, Xiandong Li, and Zhou Zhao. Uni-dubbing: Zero-shot speech synthesis from visual articulation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10082-10099, 2024.",
1290
+ "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 986-995, 2017.",
1291
+ "Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74-81, 2004.",
1292
+ "Guan-Ting Lin, Cheng-Han Chiang, and Hung-yi Lee. Advancing large language models to capture varied speaking styles and respond properly in spoken conversations. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6626-6642, Bangkok, Thailand, August 2024a. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.358.",
1293
+ "Guan-Ting Lin, Cheng-Han Chiang, and Hung-yi Lee. Advancing large language models to capture varied speaking styles and respond properly in spoken conversations. arXiv preprint arXiv:2402.12786, 2024b.",
1294
+ "Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. Does gender matter? towards fairness in dialogue systems. arXiv preprint arXiv:1910.10486, 2019.",
1295
+ "Xubo Liu, Egor Lakomkin, Konstantinos Vougioukas, Pingchuan Ma, Honglie Chen, Ruiming Xie, Morrie Doulaty, Niko Moritz, Jachym Kolar, Stavros Petridis, et al. Synthvsr: Scaling up visual speech recognition with synthetic supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18806-18815, 2023."
1296
+ ],
1297
+ "bbox": [
1298
+ 171,
1299
+ 102,
1300
+ 825,
1301
+ 924
1302
+ ],
1303
+ "page_idx": 11
1304
+ },
1305
+ {
1306
+ "type": "header",
1307
+ "text": "Published as a conference paper at ICLR 2025",
1308
+ "bbox": [
1309
+ 171,
1310
+ 32,
1311
+ 478,
1312
+ 47
1313
+ ],
1314
+ "page_idx": 11
1315
+ },
1316
+ {
1317
+ "type": "page_number",
1318
+ "text": "12",
1319
+ "bbox": [
1320
+ 490,
1321
+ 946,
1322
+ 508,
1323
+ 959
1324
+ ],
1325
+ "page_idx": 11
1326
+ },
1327
+ {
1328
+ "type": "list",
1329
+ "sub_type": "ref_text",
1330
+ "list_items": [
1331
+ "Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, and Xie Chen. emotion2vec: Self-supervised pre-training for speech emotion representation. arXiv preprint arXiv:2312.15185, 2023.",
1332
+ "OpenAI. Gpt-4o system card. https://cdn.openai.com/gpt-4o-system-card.pdf, 2024a.",
1333
+ "OpenAI. Chatgpt can now see, hear, and speak. https://openai.com/index/chatgpt-can-now-see-hear-and-speak/, 2024b.",
1334
+ "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311-318, 2002.",
1335
+ "Alexis Plaqet and Hervé Bredin. Powerset multi-class cross entropy loss for neural speaker diarization. In Proc. INTERSPEECH 2023, 2023.",
1336
+ "Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International conference on machine learning, pp. 28492-28518. PMLR, 2023.",
1337
+ "Suwon Shon, Ankita Pasad, Felix Wu, Pablo Brusco, Yoav Artzi, Karen Livescu, and Kyu J Han. Slue: New benchmark tasks for spoken language understanding evaluation on natural speech. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7927-7931. IEEE, 2022.",
1338
+ "Shuzheng Si, Wentao Ma, Haoyu Gao, Yuchuan Wu, Ting-En Lin, Yinpei Dai, Hangyu Li, Rui Yan, Fei Huang, and Yongbin Li. Spokenwoz: A large-scale speech-text benchmark for spoken task-oriented dialogue agents. Advances in Neural Information Processing Systems, 36, 2024.",
1339
+ "Tongyi SpeechTeam. Funaudiollm: Voice understanding and generation foundation models for natural interaction between humans and llms. arXiv preprint arXiv:2407.04051, 2024.",
1340
+ "Hsuan Su, Rebecca Qian, Chinnadhurai Sankar, Shahin Shayandeh, Shang-Tse Chen, Hung-yi Lee, and Daniel M Bikel. Step by step to fairness: Attributing societal bias in task-oriented dialogue systems. arXiv preprint arXiv:2311.06513, 2023.",
1341
+ "Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, and Chao Zhang. Salmonn: Towards generic hearing abilities for large language models. arXiv preprint arXiv:2310.13289, 2023.",
1342
+ "Naohiro Tawara, Atsunori Ogawa, Yuki Kitagishi, and Hosana Kamiyama. Age-vox-celeb: Multi-modal corpus for facial and speech estimation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6963-6967. IEEE, 2021.",
1343
+ "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.",
1344
+ "Zhifei Xie and Changqiao Wu. Mini-omni: Language models can hear, talk while thinking in streaming. arXiv preprint arXiv:2408.16725, 2024.",
1345
+ "Qian Yang, Jin Xu, Wenrui Liu, Yunfei Chu, Ziyue Jiang, Xiaohuan Zhou, Yichong Leng, Yuanjun Lv, Zhou Zhao, Chang Zhou, and Jingren Zhou. AIR-bench: Benchmarking large audio-language models via generative comprehension. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1979–1998, Bangkok, Thailand, August 2024a. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.109.",
1346
+ "Shu Wen Yang, Po Han Chi, Yung Sung Chuang, Cheng I Jeff Lai, Kushal Lakhotia, Yist Y Lin, Andy T Liu, Jiatong Shi, Xuankai Chang, Guan Ting Lin, et al. Superb: Speech processing universal performance benchmark. In 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021, pp. 3161-3165. International Speech Communication Association, 2021."
1347
+ ],
1348
+ "bbox": [
1349
+ 171,
1350
+ 102,
1351
+ 825,
1352
+ 922
1353
+ ],
1354
+ "page_idx": 12
1355
+ },
1356
+ {
1357
+ "type": "header",
1358
+ "text": "Published as a conference paper at ICLR 2025",
1359
+ "bbox": [
1360
+ 171,
1361
+ 32,
1362
+ 478,
1363
+ 47
1364
+ ],
1365
+ "page_idx": 12
1366
+ },
1367
+ {
1368
+ "type": "page_number",
1369
+ "text": "13",
1370
+ "bbox": [
1371
+ 490,
1372
+ 946,
1373
+ 508,
1374
+ 959
1375
+ ],
1376
+ "page_idx": 12
1377
+ },
1378
+ {
1379
+ "type": "list",
1380
+ "sub_type": "ref_text",
1381
+ "list_items": [
1382
+ "Xiaoda Yang, Xize Cheng, Dongjie Fu, Minghui Fang, Jialung Zuo, Shengpeng Ji, Zhou Zhao, and Jin Tao. Synctalklip: Highly synchronized lip-readable speaker generation with multi-task learning. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 8149-8158, 2024b.",
1383
+ "Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. arXiv preprint arXiv:2305.11000, 2023.",
1384
+ "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675, 2019."
1385
+ ],
1386
+ "bbox": [
1387
+ 171,
1388
+ 102,
1389
+ 825,
1390
+ 250
1391
+ ],
1392
+ "page_idx": 13
1393
+ },
1394
+ {
1395
+ "type": "header",
1396
+ "text": "Published as a conference paper at ICLR 2025",
1397
+ "bbox": [
1398
+ 171,
1399
+ 32,
1400
+ 478,
1401
+ 47
1402
+ ],
1403
+ "page_idx": 13
1404
+ },
1405
+ {
1406
+ "type": "page_number",
1407
+ "text": "14",
1408
+ "bbox": [
1409
+ 490,
1410
+ 946,
1411
+ 509,
1412
+ 960
1413
+ ],
1414
+ "page_idx": 13
1415
+ },
1416
+ {
1417
+ "type": "table",
1418
+ "img_path": "images/c58087110cc1b9d4ea3ccbeabae0645e8791bc30a35ca52d1ff0378d9bb86a56.jpg",
1419
+ "table_caption": [
1420
+ "Table 5: Detailed Comparison of Spoken Dialogue Systems across Various Metrics",
1421
+ "(a) BLEU Scores"
1422
+ ],
1423
+ "table_footnote": [],
1424
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">Speaker Info</td><td colspan=\"6\">Paralinguistic Info</td><td colspan=\"2\">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>2.53</td><td>2.66</td><td>3.34</td><td>2.72</td><td>3.22</td><td>4.20</td><td>2.77</td><td>2.65</td><td>3.58</td><td>2.37</td><td>3.34</td><td>3.24</td></tr><tr><td>Audio-Flamingo</td><td>2.08</td><td>2.40</td><td>2.83</td><td>0.01</td><td>2.74</td><td>3.95</td><td>2.70</td><td>2.50</td><td>2.58</td><td>1.41</td><td>3.38</td><td>2.81</td></tr><tr><td>Qwen-Audio</td><td>2.26</td><td>2.56</td><td>3.05</td><td>1.74</td><td>3.01</td><td>3.78</td><td>2.61</td><td>0.54</td><td>3.02</td><td>2.85</td><td>3.60</td><td>2.87</td></tr><tr><td>SALMONN</td><td>2.29</td><td>2.35</td><td>2.88</td><td>3.09</td><td>2.88</td><td>4.44</td><td>2.73</td><td>2.82</td><td>2.33</td><td>2.04</td><td>3.55</td><td>2.86</td></tr><tr><td>Qwen2-Audio</td><td>2.22</td><td>2.52</td><td>3.20</td><td>3.18</td><td>3.11</td><td>4.56</td><td>2.92</td><td>3.38</td><td>2.93</td><td>2.10</td><td>2.97</td><td>2.97</td></tr></table>",
1425
+ "bbox": [
1426
+ 173,
1427
+ 146,
1428
+ 861,
1429
+ 275
1430
+ ],
1431
+ "page_idx": 14
1432
+ },
1433
+ {
1434
+ "type": "table",
1435
+ "img_path": "images/0ed890603a82920ed92e6d3c26d68c7dac103ea434c205d0f42da33d9a1f6c1c.jpg",
1436
+ "table_caption": [
1437
+ "(b) ROUGE-L Scores"
1438
+ ],
1439
+ "table_footnote": [],
1440
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">Speaker Info</td><td colspan=\"6\">Paralinguistic Info</td><td colspan=\"2\">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>12.15</td><td>12.95</td><td>15.07</td><td>15.88</td><td>14.93</td><td>8.28</td><td>7.97</td><td>4.47</td><td>13.49</td><td>10.67</td><td>12.01</td><td>11.97</td></tr><tr><td>Audio-Flamingo</td><td>6.12</td><td>6.15</td><td>6.62</td><td>0.03</td><td>5.78</td><td>5.48</td><td>7.67</td><td>7.57</td><td>5.12</td><td>7.41</td><td>5.91</td><td>7.88</td></tr><tr><td>Qwen-Audio</td><td>8.34</td><td>9.62</td><td>7.12</td><td>12.09</td><td>8.24</td><td>0.71</td><td>6.61</td><td>14.36</td><td>7.76</td><td>12.36</td><td>7.29</td><td>9.01</td></tr><tr><td>SALMONN</td><td>11.52</td><td>11.43</td><td>10.51</td><td>14.80</td><td>11.81</td><td>13.30</td><td>10.56</td><td>10.22</td><td>15.71</td><td>11.01</td><td>10.05</td><td>10.51</td></tr><tr><td>Qwen2-Audio</td><td>11.51</td><td>11.44</td><td>13.18</td><td>15.66</td><td>14.18</td><td>23.13</td><td>17.34</td><td>9.58</td><td>13.45</td><td>11.36</td><td>12.23</td><td>12.18</td></tr></table>",
1441
+ "bbox": [
1442
+ 173,
1443
+ 305,
1444
+ 856,
1445
+ 433
1446
+ ],
1447
+ "page_idx": 14
1448
+ },
1449
+ {
1450
+ "type": "table",
1451
+ "img_path": "images/dae69280f52789adb78a785e5e33a39ea6f47b186f4da1c13d50baee20abdb9f.jpg",
1452
+ "table_caption": [
1453
+ "(c) METEOR Scores"
1454
+ ],
1455
+ "table_footnote": [],
1456
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">Speaker Info</td><td colspan=\"6\">Paralinguistic Info</td><td colspan=\"2\">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>16.89</td><td>20.12</td><td>21.03</td><td>15.21</td><td>19.31</td><td>10.19</td><td>9.83</td><td>8.16</td><td>16.95</td><td>10.31</td><td>12.91</td><td>12.42</td></tr><tr><td>Audio-Flamingo</td><td>8.23</td><td>7.79</td><td>10.03</td><td>0.25</td><td>9.17</td><td>8.31</td><td>8.69</td><td>11.04</td><td>8.12</td><td>7.88</td><td>9.93</td><td>11.01</td></tr><tr><td>Qwen-Audio</td><td>12.87</td><td>14.16</td><td>12.92</td><td>11.06</td><td>13.12</td><td>1.41</td><td>5.28</td><td>6.11</td><td>10.92</td><td>21.41</td><td>11.92</td><td>11.68</td></tr><tr><td>SALMONN</td><td>11.02</td><td>10.81</td><td>11.21</td><td>10.35</td><td>11.13</td><td>11.78</td><td>10.14</td><td>10.17</td><td>11.84</td><td>9.03</td><td>11.18</td><td>11.08</td></tr><tr><td>Qwen2-Audio</td><td>12.96</td><td>16.15</td><td>18.24</td><td>14.37</td><td>17.05</td><td>22.11</td><td>19.08</td><td>12.78</td><td>14.01</td><td>13.11</td><td>13.21</td><td>13.39</td></tr></table>",
1457
+ "bbox": [
1458
+ 173,
1459
+ 463,
1460
+ 856,
1461
+ 592
1462
+ ],
1463
+ "page_idx": 14
1464
+ },
1465
+ {
1466
+ "type": "table",
1467
+ "img_path": "images/4d60da50e3cb79b4435a29e6d325ae3daaf7b38b8bb92a6da4f7bae849a2467a.jpg",
1468
+ "table_caption": [
1469
+ "(d) BERTScore"
1470
+ ],
1471
+ "table_footnote": [],
1472
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">Speaker Info</td><td colspan=\"6\">Paralinguistic Info</td><td colspan=\"2\">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>86.14</td><td>86.65</td><td>87.24</td><td>86.97</td><td>86.90</td><td>84.87</td><td>85.03</td><td>84.36</td><td>87.51</td><td>83.79</td><td>86.02</td><td>86.19</td></tr><tr><td>Audio-Flamingo</td><td>83.10</td><td>83.84</td><td>83.86</td><td>75.28</td><td>83.78</td><td>84.91</td><td>84.71</td><td>84.81</td><td>83.78</td><td>82.89</td><td>83.78</td><td>84.74</td></tr><tr><td>Qwen-Audio</td><td>83.40</td><td>84.46</td><td>83.84</td><td>85.79</td><td>84.34</td><td>85.53</td><td>85.34</td><td>87.12</td><td>79.55</td><td>83.85</td><td>83.95</td><td>84.14</td></tr><tr><td>SALMONN</td><td>84.60</td><td>86.75</td><td>86.65</td><td>86.44</td><td>86.05</td><td>87.27</td><td>86.06</td><td>86.74</td><td>87.53</td><td>84.92</td><td>85.63</td><td>86.06</td></tr><tr><td>Qwen2-Audio</td><td>85.59</td><td>85.80</td><td>86.70</td><td>86.50</td><td>86.65</td><td>88.00</td><td>87.30</td><td>85.66</td><td>87.08</td><td>84.51</td><td>87.22</td><td>87.19</td></tr></table>",
1473
+ "bbox": [
1474
+ 173,
1475
+ 622,
1476
+ 856,
1477
+ 750
1478
+ ],
1479
+ "page_idx": 14
1480
+ },
1481
+ {
1482
+ "type": "text",
1483
+ "text": "A MORE EXPERIMENT RESULTS",
1484
+ "text_level": 1,
1485
+ "bbox": [
1486
+ 171,
1487
+ 772,
1488
+ 460,
1489
+ 787
1490
+ ],
1491
+ "page_idx": 14
1492
+ },
1493
+ {
1494
+ "type": "text",
1495
+ "text": "A.1 THE DETAILED PERFORMANCE COMPARISON",
1496
+ "text_level": 1,
1497
+ "bbox": [
1498
+ 171,
1499
+ 804,
1500
+ 527,
1501
+ 816
1502
+ ],
1503
+ "page_idx": 14
1504
+ },
1505
+ {
1506
+ "type": "text",
1507
+ "text": "For comparison, the detailed performance corresponding to Figure 2 is presented in Table 5.",
1508
+ "bbox": [
1509
+ 171,
1510
+ 829,
1511
+ 776,
1512
+ 844
1513
+ ],
1514
+ "page_idx": 14
1515
+ },
1516
+ {
1517
+ "type": "text",
1518
+ "text": "B LIMITATION",
1519
+ "text_level": 1,
1520
+ "bbox": [
1521
+ 171,
1522
+ 864,
1523
+ 310,
1524
+ 878
1525
+ ],
1526
+ "page_idx": 14
1527
+ },
1528
+ {
1529
+ "type": "text",
1530
+ "text": "Our work heavily relies on synthetic datasets. Although prior research (Liu et al., 2023) has shown that synthetic data can be effectively used for training and evaluation, a domain gap persists between",
1531
+ "bbox": [
1532
+ 171,
1533
+ 895,
1534
+ 823,
1535
+ 925
1536
+ ],
1537
+ "page_idx": 14
1538
+ },
1539
+ {
1540
+ "type": "header",
1541
+ "text": "Published as a conference paper at ICLR 2025",
1542
+ "bbox": [
1543
+ 173,
1544
+ 32,
1545
+ 478,
1546
+ 47
1547
+ ],
1548
+ "page_idx": 14
1549
+ },
1550
+ {
1551
+ "type": "page_number",
1552
+ "text": "15",
1553
+ "bbox": [
1554
+ 490,
1555
+ 946,
1556
+ 508,
1557
+ 959
1558
+ ],
1559
+ "page_idx": 14
1560
+ },
1561
+ {
1562
+ "type": "text",
1563
+ "text": "synthetic and real-world data. This gap may affect the generalization of models trained on synthetic data when applied to real-world dialogue scenarios.",
1564
+ "bbox": [
1565
+ 169,
1566
+ 103,
1567
+ 823,
1568
+ 133
1569
+ ],
1570
+ "page_idx": 15
1571
+ },
1572
+ {
1573
+ "type": "text",
1574
+ "text": "However, since our focus is on understanding acoustic information, synthetic data proves particularly useful in simulating various acoustic cues found in real conversational settings. Additionally, the synthetic dataset offers more diverse and controllable dialogue content, making it sufficient for evaluating whether spoken dialogue systems can understand information beyond text.",
1575
+ "bbox": [
1576
+ 169,
1577
+ 138,
1578
+ 826,
1579
+ 196
1580
+ ],
1581
+ "page_idx": 15
1582
+ },
1583
+ {
1584
+ "type": "text",
1585
+ "text": "To properly assess the performance of dialogue systems in real-world scenarios, it is crucial to use datasets based on authentic conversational environments. We believe that constructing a separate real-world dialogue evaluation benchmark, independent of our work, would be more effective in evaluating spoken dialogue systems' performance in real scenarios than using a single dataset to assess both acoustic information comprehension and real-world dialogue capabilities.",
1586
+ "bbox": [
1587
+ 169,
1588
+ 200,
1589
+ 826,
1590
+ 273
1591
+ ],
1592
+ "page_idx": 15
1593
+ },
1594
+ {
1595
+ "type": "text",
1596
+ "text": "C ETHICAL DISCUSSIONS ON SPOKEN DIALOGUE SYSTEMS",
1597
+ "text_level": 1,
1598
+ "bbox": [
1599
+ 169,
1600
+ 292,
1601
+ 689,
1602
+ 308
1603
+ ],
1604
+ "page_idx": 15
1605
+ },
1606
+ {
1607
+ "type": "text",
1608
+ "text": "C.1 FAIRNESS CHALLENGES IN SPOKEN CONVERSATION.",
1609
+ "text_level": 1,
1610
+ "bbox": [
1611
+ 169,
1612
+ 323,
1613
+ 588,
1614
+ 338
1615
+ ],
1616
+ "page_idx": 15
1617
+ },
1618
+ {
1619
+ "type": "text",
1620
+ "text": "Ensuring fairness in spoken dialogue systems involves several challenges, particularly when addressing attributes like gender:",
1621
+ "bbox": [
1622
+ 169,
1623
+ 349,
1624
+ 823,
1625
+ 378
1626
+ ],
1627
+ "page_idx": 15
1628
+ },
1629
+ {
1630
+ "type": "text",
1631
+ "text": "I. Difficulty in Identifying Gender Bias: Beyond explicit expressions (e.g., \"sir\" or \"madam\"), implicit biases (e.g., career or family-related topics) (Liu et al., 2019) are deeply embedded in existing large language models, making it difficult to guarantee unbiased responses.",
1632
+ "bbox": [
1633
+ 169,
1634
+ 383,
1635
+ 823,
1636
+ 429
1637
+ ],
1638
+ "page_idx": 15
1639
+ },
1640
+ {
1641
+ "type": "text",
1642
+ "text": "II. Fairness and Attribute Understanding: While understanding gender attributes can enhance personalization and conversational relevance, over-reliance may reinforce stereotypes. Conversely, completely eliminating gender considerations could limit the model's ability to provide contextually appropriate responses in scenarios where gender information is explicitly relevant. Therefore, an appropriate balance between fairness and attribute understanding should be achieved, ensuring that biases do not cause harm while fostering diversity in responses and improving attribute-specific relevance.",
1643
+ "bbox": [
1644
+ 169,
1645
+ 433,
1646
+ 825,
1647
+ 532
1648
+ ],
1649
+ "page_idx": 15
1650
+ },
1651
+ {
1652
+ "type": "text",
1653
+ "text": "III. Difficulty in Evaluating Bias: Current fairness metrics (Su et al., 2023) often fail to capture nuanced and context-dependent biases in spoken dialogue systems, especially in open-ended and multi-turn conversations.",
1654
+ "bbox": [
1655
+ 169,
1656
+ 537,
1657
+ 826,
1658
+ 582
1659
+ ],
1660
+ "page_idx": 15
1661
+ },
1662
+ {
1663
+ "type": "text",
1664
+ "text": "C.2 MITIGATION STRATEGIES.",
1665
+ "text_level": 1,
1666
+ "bbox": [
1667
+ 169,
1668
+ 597,
1669
+ 401,
1670
+ 612
1671
+ ],
1672
+ "page_idx": 15
1673
+ },
1674
+ {
1675
+ "type": "text",
1676
+ "text": "To address these challenges, we commit to implementing the following measures:",
1677
+ "bbox": [
1678
+ 169,
1679
+ 623,
1680
+ 710,
1681
+ 638
1682
+ ],
1683
+ "page_idx": 15
1684
+ },
1685
+ {
+ "type": "list",
+ "sub_type": "text",
+ "list_items": [
+ "I. Manual Filtering: We conducted manual filtering of all potentially sensitive data to ensure that the dataset complies with the guidelines of Collins & Clément (2012) and excludes examples that could cause harm due to attribute-related biases.",
+ "II. Bias Warnings: Clear disclaimers will be included in the documentation to highlight potential gender biases and encourage developers to consider fairness during model development.",
+ "III. Continuous Dataset Updates: We will continuously update and refine the dataset to address fairness issues. Any subsets or evaluation components found to introduce risks of bias will be removed or adjusted as necessary."
+ ],
+ "bbox": [
+ 169,
+ 643,
+ 823,
+ 772
+ ],
+ "page_idx": 15
+ },
+ {
+ "type": "header",
+ "text": "Published as a conference paper at ICLR 2025",
+ "bbox": [
+ 171,
+ 32,
+ 478,
+ 47
+ ],
+ "page_idx": 15
+ },
+ {
+ "type": "page_number",
+ "text": "16",
+ "bbox": [
+ 490,
+ 946,
+ 509,
+ 960
+ ],
+ "page_idx": 15
+ }
+ ]
2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/0237f953-23bc-44fd-b727-785d584e994b_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/0237f953-23bc-44fd-b727-785d584e994b_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69d39ea2145a38096ff9c0599f6d7326fe31d6c11d5d06d977882592a115d2ba
+ size 885291
2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/full.md ADDED
@@ -0,0 +1,291 @@
+ # VOXDIALOGUE: CAN SPOKEN DIALOGUE SYSTEMS UNDERSTAND INFORMATION BEYOND WORDS?
+
+ Xize Cheng $^{1*}$ Ruofan Hu $^{1*}$ Xiaoda Yang $^{1}$ Jingyu Lu $^{1}$ Dongjie Fu $^{1}$ Boyang Zhang $^{1}$ Zehan Wang $^{1}$ Shengpeng Ji $^{1}$ Rongjie Huang $^{1}$ Tao Jin $^{1}$ Zhou Zhao $^{1\dagger}$
+
+ Zhejiang University<sup>1</sup> chengxize@zju.edu.cn
+
+ Code & Data: https://voxdialogue.github.io/
+
+ # ABSTRACT
+
+ With the rapid advancement of large models, voice assistants are gradually acquiring the ability to engage in open-ended daily conversations with humans. However, current spoken dialogue systems often overlook multi-modal information in audio beyond text, such as speech rate, volume, emphasis, and background sounds. Relying solely on automatic speech recognition (ASR) can lead to the loss of valuable auditory cues, thereby weakening the system's ability to generate contextually appropriate responses. To address this limitation, we propose VoxDialogue, a comprehensive benchmark for evaluating the ability of spoken dialogue systems to understand multi-modal information beyond text. Specifically, we have identified 12 attributes highly correlated with acoustic information beyond words and have meticulously designed corresponding spoken dialogue test sets for each attribute, encompassing a total of $4.5\mathrm{K}$ multi-turn spoken dialogue samples. Finally, we evaluated several existing spoken dialogue models, analyzing their performance on the 12 attribute subsets of VoxDialogue. Experiments have shown that in spoken dialogue scenarios, many acoustic cues cannot be conveyed through textual information and must be directly interpreted from the audio input. In contrast, while direct spoken dialogue systems excel at processing acoustic signals, they still face limitations in handling complex dialogue tasks due to their restricted context understanding capabilities.
+
+ # 1 INTRODUCTION
+
+ Voice assistants (Ji et al., 2024a) have rapidly evolved into a focal point of both academic research and industry innovation, aiming to facilitate daily conversations (Li et al., 2017; Lee et al., 2023) and task-oriented dialogues (Budzianowski et al., 2018; Si et al., 2024) with humans. Early iterations relied heavily on automatic speech recognition (ASR) (Cheng et al., 2023b;c; Fu et al., 2024; Lei et al., 2024; Huang et al., 2023), combined with dialogue understanding and state management, to support basic, predefined tasks. However, these systems (Hoy, 2018) were constrained by their limited scope and inability to handle open-ended interactions. The advent of large language models (LLMs) (Touvron et al., 2023) with enhanced understanding and reasoning capabilities has revolutionized voice assistants, enabling them to engage in more dynamic and unrestricted dialogues with users (OpenAI, 2024b). This marks a significant departure from their earlier, more constrained functionalities, opening up new possibilities for human-computer interaction.
+
+ Yet, despite these advancements, current spoken dialogue systems (Zhang et al., 2023; Xie & Wu, 2024; Fang et al., 2024; Cheng et al., 2025) often overlook the rich multimodal information embedded in audio beyond mere spoken words—such as intonation, volume, rhythm, and background sounds. Relying solely on ASR leads to the omission of valuable auditory cues, diminishing the system's ability to generate contextually appropriate responses. For example, a system might fail to adjust its language to match a user's emotional state or regional accent, such as responding with "Yes, madam" to a female voice or adopting British colloquialisms when detecting a British accent.
+
+ Table 1: Comparison of spoken language and audio comprehension benchmarks in terms of data types and evaluation dimensions. SL. refers to Spoken Language, while Dlg. indicates whether the benchmark evaluates on dialogue tasks. Aud. represents audio comprehension, and Mus. refers to music comprehension. Speaker Info includes attributes such as age (Age), gender (Gen), accent (Acc), and language (Lan). Paralinguistic Info covers aspects like emotion (Emo), volume (Vol), speech rate (Spd), speech fidelity (Fid), stress (Str), and non-verbal expressions (NVE). Although LeBenchmark includes a small amount of conversational data (29 hours out of 2933 hours), it does not evaluate on the dialogue tasks. Please note that although AirBench can assess spoken language comprehension, its evaluation of conversational ability (AirBench-Chat) is based on text-based interactions and does not address spoken dialogue capabilities.
+
+ <table><tr><td rowspan="2">Benchmarks</td><td colspan="2">Types</td><td colspan="4">Evaluation Dimensions</td></tr><tr><td>SL.</td><td>Dlg.</td><td>Aud.</td><td>Mus.</td><td>Speaker Info</td><td>Paralinguistic Info</td></tr><tr><td>SUPERB (Yang et al., 2021)</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>✓ (Emo)</td></tr><tr><td>SLUE (Shon et al., 2022)</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>LeBenchmark (Evain et al., 2021)</td><td>✓</td><td>X†</td><td>X</td><td>X</td><td>X</td><td>✓ (Emo)</td></tr><tr><td>AF-Dialogue (Kong et al., 2024)</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>X</td></tr><tr><td>AirBench (Yang et al., 2024a)</td><td>X‡</td><td>✓</td><td>✓</td><td>✓</td><td>✓ (Age,Gen)</td><td>✓ (Emo)</td></tr><tr><td>SpokenWOZ (Si et al., 2024)</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>SD-EVAL (Ao et al., 2024)</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>✓ (Age,Gen,Acc)</td><td>✓ (Emo)</td></tr><tr><td>VoxDialogue (ours)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓ (Age,Gen,Acc,Lan)</td><td>✓ (Emo,Vol,Spd,Fid,Str,NVE)</td></tr></table>
+
+ To address these limitations, recent research has shifted towards developing multimodal audio-language models that enhance system comprehension of audio inputs. Emotion2Vec (Ma et al., 2023), trained on vast emotional speech data, stands as the first high-quality pre-trained model for emotion recognition. Qwen-Audio 1/2 (Chu et al., 2023; 2024) have been trained on extensive datasets encompassing over 30 audio-related tasks, enabling them to understand various audio types—including speech, audio events, and music. Pushing the envelope further, FunAudioLLM (SpeechTeam, 2024) offers full-scene recognition capabilities, detecting non-verbal sounds like laughter and breathing within speech.
+
+ As large-scale audio-language models continue to evolve rapidly, the scientific community has increasingly recognized the urgent need for a comprehensive benchmark to effectively evaluate spoken dialogue systems. While some progress has been made, existing benchmarks often exhibit notable shortcomings. For instance, SUPERB (Yang et al., 2021) is the first benchmark specifically designed for spoken language, but it primarily focuses on coarse-grained semantic understanding tasks, overlooking the importance of various acoustic features. Other benchmarks, such as AirBench (Yang et al., 2024a) and Audio-Flamingo (Kong et al., 2024), delve deeply into audio understanding, but their dialogue content is limited to the textual modality, making them unsuitable for evaluating spoken dialogue tasks. SpokenWOZ (Si et al., 2024), though valuable for its real human-computer interaction data, is restricted to task-driven dialogues and lacks detailed fine-grained labels. To address more specific attributes of spoken dialogue, SD-EVAL (Ao et al., 2024) shifts the focus to characteristics like gender, age, accent, and emotion, yet its effectiveness is limited by the use of speech utterances that are not derived from dialogue scenarios.
+
+ To better benchmark spoken dialogue systems, we analyzed non-textual multimodal acoustic information that may affect dialogue responses, which can be categorized into three main types: speaker information (age, gender, accent, language), paralinguistic information (emotion, volume, speed, fidelity, stress, and various non-verbal expressions), and background sounds (audio and music). In real-world dialogue scenarios, it is crucial to capture not only the semantic content of the speech but also these acoustic cues to generate more appropriate responses. For example, determining the speaker's age from their vocal tone can help select a suitable form of address. We designed a tailored spoken dialogue synthesis pipeline for each attribute to ensure that the synthesized dialogue data aligns accurately with the corresponding attribute. Leveraging the strong inference capabilities of large language models (LLMs) and high-fidelity text-to-speech (TTS) synthesis (Ji et al., 2024b; Du et al., 2024), we constructed the VoxDialogue benchmark, comprising 12 dialogue scenarios specifically tailored to different acoustic attributes. As shown in Figure 1, to the best of our knowledge, this is the most comprehensive work focusing on acoustic information in spoken dialogue benchmarks. Based on VoxDialogue, we evaluated several existing spoken dialogue systems, comparing the performance of ASR-based dialogue systems and direct dialogue systems across various acoustic-related tasks. The results demonstrate that ASR-based methods are limited in their ability to understand the diverse acoustic attributes present in spoken dialogues, highlighting the importance of developing large-scale audio-language models. At the same time, existing direct dialogue systems (such as Qwen2-Audio) still exhibit limitations in long-context reasoning, indicating the need for further improvement in their contextual understanding capabilities. All our code and data will be open-sourced. Our main contributions are:
+
+ - We present the first benchmark for evaluating the ability of spoken dialogue systems to understand acoustic information beyond speech content, VoxDialogue, which integrates 12 acoustic dimensions, including speaker attributes (age, gender, accent, language), paralinguistic features (emotion, volume, speed, fidelity, stress, non-verbal expressions), and environmental information (audio, music).
+ - We were the first to develop distinct spoken dialogue data synthesis methods tailored for different acoustic attributes. This approach enables large-scale synthesis of spoken dialogue data, supporting extensive training for spoken dialogue models and endowing them with more comprehensive acoustic understanding capabilities.
+ - We conducted a systematic evaluation of existing spoken dialogue systems, comparing their performance in terms of understanding acoustic information, supplemented by a qualitative analysis using a GPT-based metric. Specifically, inspired by the MOS (Mean Opinion Score) evaluation mechanism, we provided GPT with descriptive criteria corresponding to different scores, enabling the evaluation model to more accurately assess each response in terms of both acoustic attributes and content quality.
+
+ # 2 RELATED WORKS
+
+ # 2.1 SPOKEN DIALOG SYSTEM
+
+ With the development of large-scale language models, increasingly powerful spoken dialogue models have emerged, utilizing extensive training corpora for single tasks with LLM-based instructions to achieve comprehensive audio understanding capabilities. SpeechGPT (Zhang et al., 2023) integrates discrete speech units into large language models (LLMs), making it a speech-centric model. Qwen-Audio 1/2 (Chu et al., 2023; 2024) established the first large-scale, comprehensive audio model for over 30 audio-related tasks. Similarly, Salmonn (Tang et al., 2023) addresses task complexity in audio models by introducing more intricate story generation tasks. Additionally, some directly use dialogue databases for training. StyleTalk (Lin et al., 2024b) focused on emotional dialogue tasks and introduced the first spoken dialogue model capable of generating responses with varying emotional tones. Recent studies (Cheng et al., 2023c;b;a; Lei et al., 2024; Fu et al., 2024; Yang et al., 2024b) in conversational AI even have begun examining how visual data integration can enhance the contextual awareness of spoken dialogue systems.
+
+ However, existing spoken dialogue models (Xie & Wu, 2024; Fang et al., 2024) primarily focus on understanding speech content and audio information, with only a few works specifically addressing detailed acoustic attributes within the speech. This oversight results in the loss of crucial information in spoken dialogue, which, as our experiments show, can significantly undermine the quality and effectiveness of response generation in daily dialogue.
+
+ # 2.2 SPOKEN LANGUAGE BENCHMARK
+
+ With the rapid development of large-scale audio models (Chu et al., 2024; SpeechTeam, 2024), the scientific community has increasingly recognized the need for a comprehensive benchmark to evaluate spoken dialogue systems. While some progress has been made, many existing benchmarks still fall short. For instance, SUPERB (Yang et al., 2021) was the first benchmark specifically designed for spoken language, but it primarily focuses on coarse-grained understanding tasks. AudioFlamingo (Kong et al., 2024) explores audio comprehension but is limited to text-based dialogue data, lacking a focus on evaluating spoken dialogue. AirBench (Yang et al., 2024a) was the first to propose using GPT-4 (OpenAI, 2024b) to measure spoken dialogue system performance, but its evaluation set remains constrained to QA interactions. SpokenWOZ (Si et al., 2024) is a large-scale task-oriented dataset that offers real human interaction data, making it valuable for evaluating task-driven dialogue systems. SD-Eval (Ao et al., 2024), which emphasizes acoustic attributes such as gender, age, accent, and emotion, uses raw audio from confessional-style corpora, making it less suitable for conversational scenarios.
+
+ Table 2: Dialogue examples of VoxDialogue. The bold black text highlights the acoustic and background information unique to spoken dialogue, while the blue text emphasizes responses that depend on this information.
+
+ <table><tr><td>Attributes</td><td>Example</td></tr><tr><td colspan="2">I. Speaker Information</td></tr><tr><td>Gender</td><td>A: Excuse me, I'm looking for a reliable smartphone that suits my needs. (Man Voice) B: Certainly, Sir! ... excellent performance and a long-lasting battery, perfect for business tasks.</td></tr><tr><td>Age</td><td>A: What dishes do you recommend? (Young Voice) B: Sure! You might enjoy our specialty beef burger, it's fantastic with fresh juice.</td></tr><tr><td>Accent</td><td>A: Could you help me with my schedule? (British Accent) B: Of course, I'll assist you in organizing your timetable.</td></tr><tr><td>Language</td><td>A: 今天天气怎么样? (Speaking in Chinese) B: 今天天气晴朗,气温适中,适合外出活动。</td></tr><tr><td colspan="2">II. Paralinguistic Information</td></tr><tr><td>Emotion</td><td>A: Can you recommend a good movie? (Sad Tone) B: I get it. Maybe Soul—it's emotional and might bring you some comfort.</td></tr><tr><td>Volume</td><td>A: Can you tell me how to solve this problem? (Loud Volume) B: Could you speak more quietly? People around are studying.</td></tr><tr><td>Speed</td><td>A: I'd like to get the burger, a soda, and also the pasta, please. (Speaking Quickly) B: Sorry, could you say that more slowly? I didn't catch everything you said.</td></tr><tr><td>Fidelity</td><td>A: Can you tell me the time of the meeting? (Poor Audio Quality) B: I'm having trouble hearing you. Could you improve the audio quality?</td></tr><tr><td>Stress</td><td>A: I really don't like making sushi. (Emphasis on "making sushi") B: Oh, I see. If you don't like making sushi, what other types of cooking do you enjoy?</td></tr><tr><td>Non-verbal Expressions</td><td>A: Could you help me move this box? (With a sigh before speaking) B: Are you feeling okay? It seems like you're really tired. I can take care of it for you.</td></tr><tr><td colspan="2">III. Environmental Information</td></tr><tr><td>Audio Events</td><td>A: What was that sound just now? (Background sound: airplane engine sound, explosion sound) B: That was a loud explosion. It sounded like the plane exploded. Hope no one was hurt.</td></tr><tr><td>Music</td><td>A: Hey, what instrument is this song played on? (Music: Piano Song, Sad Song) B: It should be the piano, it sounds so sad.</td></tr></table>
+
+ However, due to the challenges associated with collecting spoken dialogue data in specific scenarios, existing benchmarks are unable to effectively evaluate whether spoken dialogue systems can understand various information beyond words. To address this limitation, we developed VoxDialogue, a benchmark built using synthetic data that focuses on 12 acoustic dimensions that can significantly influence dialogue content. These dimensions include speaker information (age, gender, accent, language), paralinguistic information (emotion, volume, speed, fidelity, stress, non-verbal expressions), and environmental information (audio events, music). Ultimately, VoxDialogue enables a comprehensive evaluation of the ability of current spoken dialogue systems to process and interpret such detailed acoustic information.
+
+ # 3 VOXDIALOGUE
+
+ # 3.1 OVERVIEW
+
+ Spoken dialogue systems are typically used in daily dialogues (Lin et al., 2024a). As shown in Table 2, we evaluate the performance of spoken dialogue systems across three categories of information in daily dialogue scenarios. Beyond understanding the speech content, spoken dialogue systems must also generate the most appropriate responses by considering the speaker's emotions, gender, and other acoustic-related information. Therefore, unlike traditional text-based dialogue benchmarks (Li et al., 2017), we systematically analyze the acoustic characteristics that may influence response content and have developed a tailored evaluation set specifically for spoken dialogue systems. The evaluation set for daily dialogue is divided into the following categories:
+
+ I. Speaker Information. (1) Age: Responses should be tailored to the speaker's age, adjusting salutations (e.g., Mrs./Miss) or suggesting content appropriate for their age group. (2) Gender: Responses should be gender-specific, modifying salutations (e.g., Mr./Mrs.) or offering preferences based on gender. (3) Accent: Responses should account for the speaker's accent, selecting vocabulary that aligns with their speech (e.g., British people may be more accustomed to using 'timetable' instead of 'schedule'). (4) Language: Responses should be adapted to the speaker's language, choosing the most appropriate language for the response.
+
+ II. Paralinguistic Information. (5) Emotion: Responses should detect the speaker's emotional state and provide a suitable reply (e.g., suggesting comforting music when sensing distress). (6) Volume: Responses should consider the speaker's volume, asking them to lower or raise their voice (e.g., requesting quieter speech in quiet environments). (7) Speed: Responses should adjust to the speaker's speech rate, asking them to slow down or clarify if speaking too quickly for comprehension. (8) Fidelity: Responses should detect poor audio quality and ask the speaker to repeat or improve the clarity of their speech for better understanding. (9) Stress: Responses should recognize emphasis on specific words and tailor replies to focus on the stressed content. (10) Non-verbal Expressions: Responses should account for non-verbal cues such as sighs, detecting emotions like tiredness or frustration, and offering assistance accordingly.
+
+ III. Environmental Information. (11) Audio Event: Responses should recognize relevant audio events and adapt accordingly. (12) Music: Responses should adjust to the type and mood of the background music.
+
+ # 3.2 SPOKEN DIALOGUE GENERATION
+
+ Stage 1: Dialogue Script Synthesis. Building on the methodology of previous studies (Lin et al., 2024a; Cheng et al., 2025), we employed large language models with advanced reasoning capabilities to synthesize spoken conversation scripts tailored to diverse scenarios and acoustic conditions. Specifically, we utilized GPT-4o (OpenAI, 2024a) to pre-generate several rounds of historical conversations, followed by the generation of contextually appropriate responses under various controlled acoustic conditions. This approach ensures that the synthesized dialogue scripts capture a wide range of acoustic features, thereby enhancing their robustness and diversity.
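+
+ To make the two-step procedure of Stage 1 concrete, the following is a minimal sketch, assuming the `openai` Python client; the prompt wording, the number of turns, and the example scenario are illustrative stand-ins rather than the actual templates (which are in the supplementary materials).
+
+ ```python
+ # Sketch of Stage 1: pre-generate dialogue history, then generate the final
+ # response under a controlled acoustic condition. Prompts are illustrative.
+ from openai import OpenAI
+
+ client = OpenAI()  # reads OPENAI_API_KEY from the environment
+
+ def ask(prompt: str) -> str:
+     out = client.chat.completions.create(
+         model="gpt-4o",
+         messages=[{"role": "user", "content": prompt}],
+     )
+     return out.choices[0].message.content
+
+ def synthesize_script(scenario: str, condition: str, turns: int = 6) -> str:
+     # Step 1: several rounds of historical conversation for the scenario.
+     history = ask(
+         f"Write the first {turns - 1} turns of a daily spoken dialogue "
+         f"between a user (A) and an assistant (B) about: {scenario}. "
+         f"Label each turn 'A:' or 'B:'."
+     )
+     # Step 2: a contextually appropriate final reply, conditioned on the
+     # acoustic attribute of A's last utterance.
+     reply = ask(
+         f"{history}\n\nA's last utterance is spoken with this acoustic "
+         f"condition: {condition}. Write B's final reply so that it "
+         f"explicitly accounts for that condition."
+     )
+     return history + "\n" + reply
+
+ print(synthesize_script("recommending a movie", "sad tone"))
+ ```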
+
+ Stage 2: Spoken Dialogue Generation. We carefully tailored the most appropriate speech synthesis method to each attribute, designing a dedicated spoken dialogue synthesis pipeline per attribute to ensure that the synthesized dialogue data aligns accurately with the corresponding attribute: (1) Gender, Speed and Emotion. We use COSYVOICE-300M-INSTRUCT<sup>1</sup> to achieve conditional speech generation based on gender and emotion by adjusting style instructions. (2) Stress, Language, and Non-verbal Expressions. We achieved control over these aspects by adjusting the text content in COSYVOICE-300M-INSTRUCT (Stress, Non-verbal Expressions) and COSYVOICE-300M-SFT<sup>2</sup> (Language), adding `<stress>...</stress>` tags or [laughter] markers, or changing the language of the text. (3) Volume, Fidelity, Audio Events, and Music. We used COSYVOICE-300M-SFT to generate the basic speech, then applied post-processing techniques to fine-tune these specific attributes. The details of post-processing are shown in Stage 4. (4) Age. We randomly selected 1,000 speaker samples of different ages from Hechmi et al. (2021) and Tawara et al. (2021) as reference timbres and used COSYVOICE-300M<sup>3</sup> for zero-shot TTS synthesis. (5) Accent. We used the industrial-grade TTS tool edge-TTS<sup>4</sup>, which offers over 318 timbre references spanning various regions, languages, and genders, to achieve precise accent generation.
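+
+ As an illustration of item (5), here is a minimal sketch of accent-controlled synthesis with edge-TTS, assuming the `edge-tts` package; the voice names below are illustrative picks from its catalogue (`edge-tts --list-voices` prints the full list).
+
+ ```python
+ # Sketch of accent generation (Stage 2, item 5): the accent is carried by the
+ # choice of regional voice. Voice names are illustrative.
+ import asyncio
+ import edge_tts
+
+ ACCENT_VOICES = {
+     "GB": "en-GB-SoniaNeural",    # British accent
+     "US": "en-US-JennyNeural",    # American accent
+     "IN": "en-IN-NeerjaNeural",   # Indian accent
+     "AU": "en-AU-NatashaNeural",  # Australian accent
+ }
+
+ async def synthesize_with_accent(text: str, accent: str, out_path: str) -> None:
+     communicate = edge_tts.Communicate(text, ACCENT_VOICES[accent])
+     await communicate.save(out_path)
+
+ asyncio.run(synthesize_with_accent(
+     "Could you help me with my schedule?", "GB", "turn_gb.mp3"))
+ ```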
+
+ <sup>1</sup>https://huggingface.co/FunAudioLLM/CosyVoice-300M-Instruct
+ <sup>2</sup>https://huggingface.co/FunAudioLLM/CosyVoice-300M-SFT
+ <sup>3</sup>https://huggingface.co/model-scope/CosyVoice-300M
+ <sup>4</sup>https://github.com/rany2/edge-tts
+
+ Stage 3: Automatic Verification for Spoken Dialogue. To ensure the quality of the synthesized spoken dialogue data, we first employed a pre-trained model to automatically filter out unqualified samples, removing those with generation errors and inconsistent timbre. Specifically, we used the Whisper model (Radford et al., 2023) to filter out all sentences with a word error rate (WER) greater than $5\%$, and applied speaker-diarization-3.1 (Plaquet & Bredin, 2023; Bredin, 2023) to eliminate samples with timbre inconsistencies in speeches of the same speaker throughout the dialogue sequence.
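+
+ A minimal sketch of the WER filter described above, assuming the `openai-whisper` and `jiwer` packages; the model size and text normalization are illustrative choices, and the timbre-consistency check via speaker-diarization-3.1 is omitted here.
+
+ ```python
+ # Sketch of the Stage 3 WER filter: transcribe each turn with Whisper and
+ # reject it if the WER against the generated script exceeds 5%.
+ import jiwer
+ import whisper
+
+ asr_model = whisper.load_model("base")  # illustrative model size
+
+ def normalize(s: str) -> str:
+     # Light normalization so casing/punctuation do not inflate the WER.
+     return "".join(ch for ch in s.lower() if ch.isalnum() or ch.isspace())
+
+ def passes_wer_filter(wav_path: str, script_text: str,
+                       threshold: float = 0.05) -> bool:
+     hypothesis = asr_model.transcribe(wav_path)["text"]
+     return jiwer.wer(normalize(script_text), normalize(hypothesis)) <= threshold
+ ```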
+
+ Stage 4: Post-processing for Specific Acoustic Attributes. For attributes such as volume, fidelity, audio events, and music, we performed post-processing to ensure that the audio aligns with the required expectations. For fidelity, according to the Nyquist-Shannon sampling theorem, the sampling rate must be at least twice the highest frequency of the signal to ensure lossless reconstruction; to capture frequencies up to $4\mathrm{kHz}$, the minimum sampling rate is $8\mathrm{kHz}$. We therefore downsampled the speech to a $4\mathrm{kHz}$ sampling rate (discarding part of the speech signal to represent 'poor' audio quality) and then resampled it back to $16\mathrm{kHz}$ to simulate poor audio fidelity. For volume, dialogue turns labeled as 'loud' were amplified by increasing the power 8-fold to simulate loud speech, while for turns labeled as 'low', the audio power was reduced to $50\%$ of its original level to simulate poor microphone reception. For audio events, a large language model is used to classify events as either temporary or continuous. Temporary audio events, such as a door slamming or a phone ringing, are brief sounds that occur momentarily and are spliced before the first voice segment. In contrast, continuous audio events, like background chatter or street noise, are prolonged and are looped as background sound throughout the conversation. For music, we randomly spliced it before the first speech segment or set it to play in a loop as background sound.
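+
+ The volume and fidelity manipulations above map to a few lines of signal processing; the sketch below assumes `numpy`, `scipy`, and `soundfile` with 16 kHz mono input, and is one possible realization rather than the exact script used.
+
+ ```python
+ # Sketch of Stage 4 post-processing for the volume and fidelity attributes.
+ import numpy as np
+ import soundfile as sf
+ from scipy.signal import resample_poly
+
+ def degrade_fidelity(wav: np.ndarray, sr: int = 16000) -> np.ndarray:
+     # Downsample to a 4 kHz sampling rate (only content below 2 kHz survives),
+     # then resample back to 16 kHz: the discarded band is lost for good.
+     low = resample_poly(wav, up=4000, down=sr)
+     return resample_poly(low, up=sr, down=4000).astype(np.float32)
+
+ def scale_volume(wav: np.ndarray, power_ratio: float) -> np.ndarray:
+     # An 8x power increase is a sqrt(8) amplitude gain ('loud');
+     # a 0.5x power reduction is a sqrt(0.5) amplitude gain ('low').
+     return np.clip(wav * np.sqrt(power_ratio), -1.0, 1.0)
+
+ wav, sr = sf.read("turn.wav")
+ sf.write("turn_loud.wav", scale_volume(wav, 8.0), sr)
+ sf.write("turn_lowfi.wav", degrade_fidelity(wav, sr), sr)
+ ```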
+
+ Stage 5: Human Verification. While large language models (LLMs) are effective at following instructions and generating coherent conversation samples, they are primarily trained on text data and lack exposure to human spoken conversations. As a result, the automatically generated data may exhibit unnatural characteristics. To ensure the naturalness and logical consistency of the spoken conversation sample pairs with the audio features, we employ human annotators for additional quality checks.
+
+ # 3.3 DATASET STATISTICS
+
+ ![](images/f4ec7dd1f239e2872eeea2645c1a280f948e140ce9424a0b9bf46134b0476e50.jpg)
+
+ ![](images/f98b6134425a0ac27628d18d72cf4c9f9eca703da6aa040eff614c722093cb4a.jpg)
+ (a) Word Cloud of VoxDialogue.
+
+ ![](images/daebf828bd42bf148cddeba4b94dd2ee83b59584d6e59105a72cb2788e430651.jpg)
+ (b) The Duration Distribution of Turns.
+ (c) The Duration Distribution of Dialogues.
+
+ ![](images/b3bcde75b6dd926c2ce7c2b42753b581a249006b3763be4c10a1456388ff3738.jpg)
+
+ ![](images/b84f7bed2f3e790a852919f91b8d65116305019317c247284b9cf3dc49878acf.jpg)
+ (d) Distribution of Each Attribute.
+ (e) Distribution of multi-turn dialogue.
+
+ Figure 1: Visualization of statistical analysis of VoxDialogue.
+
+ Distribution of Attribute Categories. As shown in Figure 1 (d), the distribution of attribute categories in VoxDialogue is balanced, allowing for a comprehensive evaluation of spoken dialogue systems' understanding and dialogue capabilities across various acoustic attributes. In Figure 1 (a), we also present a word cloud of VoxDialogue, where it is evident that the dataset primarily consists of daily dialogue, featuring a large number of natural spoken words such as “yeah,” which are representative of daily spoken interactions. This makes it suitable for assessing the performance of spoken dialogue systems in real-world dialogue scenarios. Additionally, the dataset contains numerous acoustically relevant keywords, such as “heard,” “loud,” and “sound,” further supporting the evaluation of acoustic-related aspects of dialogue understanding.
+
+ Table 3: Detailed statistics of the corresponding subsets of each attribute in VoxDialogue. Gray fonts indicate that samples of this attribute are included in other subsets. IN (India), CA (Canada), ZA (South Africa), GB (United Kingdom), SG (Singapore), US (United States), and AU (Australia). Turns represents the total number of turns in each subset, Dialog. indicates the number of dialogues in each subset, Avg denotes the average number of turns per dialogue in each subset, and Dur. refers to the total duration (in hours) of all dialogues in each subset.
+
+ <table><tr><td>Attributes</td><td>Categories</td><td>Turns</td><td>Dialog.</td><td>Avg</td><td>Dur.</td></tr><tr><td colspan="6">I. Speaker Information</td></tr><tr><td>Gender</td><td>Male, Female</td><td>2040</td><td>340</td><td>6.0</td><td>3.17</td></tr><tr><td>Age</td><td>Youth (15-30), Middle-Aged (30-60), Elderly (60+)</td><td>3096</td><td>447</td><td>6.9</td><td>6.05</td></tr><tr><td>Accent</td><td>IN, CA, ZA, GB, SG, US, AU</td><td>1440</td><td>240</td><td>6.0</td><td>2.20</td></tr><tr><td>Language</td><td>Chinese, English</td><td>2892</td><td>482</td><td>6.0</td><td>3.51</td></tr><tr><td colspan="6">II. Paralinguistic Information</td></tr><tr><td>Emotion</td><td>Neutral, Happy, Sad, Angry, Surprised, Fearful, Disgusted</td><td>1980</td><td>330</td><td>6.0</td><td>2.41</td></tr><tr><td>Volume</td><td>Loud Volume, Low Volume, Normal Volume</td><td>1824</td><td>304</td><td>6.0</td><td>2.08</td></tr><tr><td>Speed</td><td>High Speed, Low Speed, Normal Speed</td><td>2184</td><td>364</td><td>6.0</td><td>2.93</td></tr><tr><td>Fidelity</td><td>Low Fidelity, Normal Fidelity</td><td>2196</td><td>366</td><td>6.0</td><td>3.36</td></tr><tr><td>Stress</td><td>Stress, No Stress</td><td>2354</td><td>392</td><td>6.0</td><td>2.51</td></tr><tr><td>NVE</td><td>Laughter, No Laughter</td><td>2046</td><td>341</td><td>6.0</td><td>3.68</td></tr><tr><td colspan="6">III. Environmental Information</td></tr><tr><td>Audio</td><td>The caption of different audio. (e.g., The wind is blowing and rustling occurs.)</td><td>5000</td><td>500</td><td>10.0</td><td>5.25</td></tr><tr><td>Music</td><td>The aspect list of different music pieces. (e.g., [steeldrum, higher register, amateur recording])</td><td>3734</td><td>420</td><td>8.9</td><td>5.42</td></tr><tr><td>Overall</td><td></td><td>30.7K</td><td>4.5K</td><td>6.8</td><td>42.56</td></tr></table>
+
+ Distribution of Dialogue Turns and Duration. All dialogues in our dataset are multi-turn dialogues. In Figure 1 (e), we show the distribution of dialogue turns, with the majority consisting of 6 turns and a maximum of 10 turns. This allows for a comprehensive evaluation of spoken dialogue systems' ability to understand contexts of varying lengths. In addition, Figures 1 (b) and 1 (c) illustrate the distribution of each turn and the overall dialogue length, respectively, showing that most sentences are approximately 4 seconds long. This implies that the system must understand the context and reason effectively before generating a response.
+
+ Statistics for Subset of Each Attribute. We present the detailed statistics of each attribute in VoxDialogue in Table 3, covering 35 different categories across 12 attributes. The average number of turns per dialogue exceeds 6, with each attribute containing more than 300 dialogues, ensuring comprehensive reflection of dialogue capabilities.
+
+ # 4 BENCHMARK FOR SPOKEN DIALOGUE SYSTEM
+
+ # 4.1 TASK DEFINITION
+
+ ![](images/8e2bd697647489332f0230fe03e22b0cfcd58781dc01fdb589467dc98833bb66.jpg)
+ (a) Comparison of BLEU Across Methods and Attributes.
+
+ ![](images/fd8250fef90ac141cdfb3cd4cd0e72be3b98344f686bb88a36d6711ed70f9ee0.jpg)
+ (b) Comparison of ROUGE-L Across Methods and Attributes.
+
+ ![](images/335ab2868fbc1bc99fc519b8712e8bd06531281e417d33719946341b8772de09.jpg)
+ (c) Comparison of METEOR Across Methods and Attributes.
+
+ ![](images/23a282e6e73083d9529e8e73ba11245757698dd6a316d80fb39b745394dcdf7b.jpg)
+ (d) Comparison of BERTScore Across Methods and Attributes.
+
+ Figure 2: The comparison of spoken dialogue performance across 12 different attribute-specific test sets on the VoxDialogue dataset.
+
+ The task of a spoken dialogue system is to generate appropriate responses based on the contextual information from the sequence of human dialogue (e.g., the user's utterance sequence) and the preceding assistant response sequence, where the total number of dialogue turns is denoted by $t$. The goal of the spoken dialogue system is to generate the most suitable response based on the previous $t$ utterances and the $t - 1$ historical replies. In our work, we evaluate the performance of the spoken dialogue system by focusing solely on the final utterance of each dialogue.
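+
+ In symbols (our paraphrase of the definition above, not notation from the paper), with $u_i$ denoting the user's utterances and $r_i$ the assistant's replies, the system scored at turn $t$ produces
+
+ $$
+ \hat{r}_t = \arg\max_{r} \, P\left(r \mid u_1, r_1, \ldots, u_{t-1}, r_{t-1}, u_t\right),
+ $$
+
+ i.e., it conditions on the previous $t$ utterances and the $t - 1$ historical replies, and only $\hat{r}_t$ for the final turn of each dialogue is evaluated.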
+
+ # 4.2 EVALUATION METRICS
+
+ To assess the model's performance, we conducted separate tests on each subset of VoxDialogue. Drawing on previous research (Ao et al., 2024), we utilized both quantitative and qualitative metrics for a comprehensive evaluation. The quantitative evaluation focused on two key aspects: content and style. For content evaluation, we employed widely recognized text generation metrics, including vocabulary-level measures such as BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), and METEOR (Banerjee & Lavie, 2005), alongside semantic-level metrics like BERTScore (Zhang et al., 2019). For style evaluation, we calculated the weighted F1 score of speech sentiment.
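+
+ These metrics all have standard open-source implementations; the sketch below, assuming the `sacrebleu`, `rouge-score`, `nltk` (with its wordnet data downloaded), and `bert-score` packages, shows one way to score a single response, though each library's defaults need not match the paper's exact configuration.
+
+ ```python
+ # Sketch of the content-level metrics: BLEU, ROUGE-L, METEOR, BERTScore.
+ import sacrebleu
+ from rouge_score import rouge_scorer
+ from nltk.translate.meteor_score import meteor_score  # needs nltk wordnet data
+ from bert_score import score as bertscore
+
+ def content_metrics(hypothesis: str, reference: str) -> dict:
+     rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
+     p, r, f = bertscore([hypothesis], [reference], lang="en")
+     return {
+         "BLEU": sacrebleu.sentence_bleu(hypothesis, [reference]).score,
+         "ROUGE-L": rouge.score(reference, hypothesis)["rougeL"].fmeasure,
+         "METEOR": meteor_score([reference.split()], hypothesis.split()),
+         "BERTScore": f.mean().item(),
+     }
+
+ print(content_metrics("It should be the piano, it sounds so sad.",
+                       "That sounds like a piano, and quite a sad piece."))
+ ```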
+
+ In addition to these quantitative assessments, we conducted a qualitative analysis using a GPT-based metric (Yang et al., 2024a). The meaning of each score is as follows: 1: Contextually relevant but lacks attribute information. 2: Partially relevant to the context but feels unnatural, with no attribute information. 3: Partially relevant to the context, with mention of the attribute. 4: Contextually relevant and natural, mentioning the attribute, but could be improved. 5: Contextually relevant, smooth, natural, and accurately addresses the attribute. All of the evaluation prompt templates are included in the supplementary materials.
+
+ Table 4: GPT-based Metric Comparison of Different Spoken Dialogue Models on VoxDialogue.
+
+ <table><tr><td rowspan="2">Method</td><td colspan="4">Speaker Info</td><td colspan="6">Paralinguistic Info</td><td colspan="2">Env Info</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td colspan="13">ASR-Based Spoken Dialogue System</td></tr><tr><td>FunAudioLLM (SpeechTeam, 2024)</td><td>4.32</td><td>4.39</td><td>3.57</td><td>4.61</td><td>4.09</td><td>1.82</td><td>1.92</td><td>1.79</td><td>3.13</td><td>2.87</td><td>3.47</td><td>3.59</td></tr><tr><td colspan="13">Direct Spoken Dialogue System</td></tr><tr><td>Audio-Flamingo (Kong et al., 2024)</td><td>1.00</td><td>1.00</td><td>1.04</td><td>1.72</td><td>1.00</td><td>1.20</td><td>1.14</td><td>1.26</td><td>1.34</td><td>3.06</td><td>1.37</td><td>1.11</td></tr><tr><td>SALMONN (Tang et al., 2023)</td><td>1.99</td><td>1.64</td><td>1.78</td><td>3.50</td><td>1.84</td><td>2.88</td><td>2.27</td><td>2.29</td><td>3.86</td><td>2.59</td><td>2.15</td><td>2.23</td></tr><tr><td>Qwen-Audio (Chu et al., 2023)</td><td>1.36</td><td>1.04</td><td>1.28</td><td>1.04</td><td>1.06</td><td>1.48</td><td>1.08</td><td>1.32</td><td>2.49</td><td>2.65</td><td>1.42</td><td>1.18</td></tr><tr><td>Qwen2-Audio (Chu et al., 2024)</td><td>3.46</td><td>4.18</td><td>2.71</td><td>4.43</td><td>3.73</td><td>3.06</td><td>3.29</td><td>2.98</td><td>3.93</td><td>3.46</td><td>3.81</td><td>3.98</td></tr></table>
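+
+ The rubric above can be dropped into a scoring prompt more or less verbatim; the sketch below assumes the `openai` client, and the judge-model name and prompt framing are illustrative (the actual templates are in the supplementary materials).
+
+ ```python
+ # Sketch of the MOS-style GPT scoring: the judge sees the dialogue context,
+ # the acoustic attribute, and the candidate reply, and returns a 1-5 score.
+ from openai import OpenAI
+
+ RUBRIC = """Score the assistant's reply from 1 to 5:
+ 1: contextually relevant but lacks attribute information.
+ 2: partially relevant but unnatural, with no attribute information.
+ 3: partially relevant, with mention of the attribute.
+ 4: relevant and natural, mentions the attribute, but could be improved.
+ 5: relevant, smooth, natural, and accurately addresses the attribute."""
+
+ client = OpenAI()
+
+ def gpt_score(context: str, attribute: str, reply: str) -> int:
+     out = client.chat.completions.create(
+         model="gpt-4-turbo",  # illustrative choice of judge model
+         messages=[
+             {"role": "system", "content": RUBRIC},
+             {"role": "user", "content": (
+                 f"Dialogue context:\n{context}\n\n"
+                 f"Acoustic attribute of the last user turn: {attribute}\n\n"
+                 f"Assistant reply to score: {reply}\n\n"
+                 f"Answer with a single integer from 1 to 5.")},
+         ],
+     )
+     return int(out.choices[0].message.content.strip())
+ ```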
+
+ # 4.3 SPOKEN DIALOGUE SYSTEM
+
+ In order to build a comprehensive benchmark, we evaluated two main types of spoken dialogue system approaches: (1) ASR-based dialogue systems (e.g., FunAudioLLM (SpeechTeam, 2024)) and (2) direct spoken dialogue systems $^5$ (e.g., Audio-Flamingo (Kong et al., 2024), SALMONN (Tang et al., 2023), Qwen-Audio Instruct (Chu et al., 2023), and Qwen2-Audio Instruct (Chu et al., 2024)). Figure 2 presents a comparative analysis using four metrics across various attributes on the VoxDialogue dataset. Based on the experimental results, we gained the following key insights:
+
+ ASR-based systems excel in context-sensitive tasks. In attributes that can be inferred through context understanding, ASR-based systems (such as FunAudioLLM) show significant advantages. ASR systems first transcribe speech into text and then process it, allowing them to more effectively capture and analyze the context of a conversation. For example, in attributes like Emotion and Speaker Information (Age, Gender, Accent, Language), FunAudioLLM consistently outperforms direct spoken dialogue systems. The results from BLEU, ROUGE-L, METEOR, and BERTScore metrics indicate that FunAudioLLM achieves higher scores, such as in emotion (3.22 BLEU, 14.93 ROUGE-L, 19.31 METEOR, 86.92 BERTScore). This proves that most current direct spoken dialogue systems lack adequate context understanding capabilities and are far weaker than text-based large language models. Additionally, although ASR-based models may have limitations in understanding acoustic information, comparing them provides a valuable performance reference, representing the upper bound performance without the integration of acoustic information.
+
+ Advantages of direct spoken dialogue systems in acoustic attribute processing. Although ASR-based systems can leverage the strong context understanding capabilities of large language models, they struggle with attributes that heavily rely on sound understanding (such as volume, fidelity, speed, and other paralinguistic information). ASR-based methods face challenges when addressing dialogue tasks related to these attributes. In contrast, direct systems like Qwen2-Audio excel in tasks involving these acoustic properties. The results show that Qwen2-Audio outperforms other systems in these categories. For instance, Qwen2-Audio achieved the highest scores for volume (4.56 BLEU, 16.13 ROUGE-L, 22.82 METEOR, and 87.99 BERTScore), demonstrating its ability to handle loud and soft speech variations more effectively. Similarly, fidelity is another strong point for direct dialogue systems. Qwen2-Audio's excellent performance in handling varying fidelity levels (3.38 BLEU, 14.36 ROUGE-L, 12.78 METEOR, 85.66 BERTScore) confirms its strength in spoken dialogue tasks that heavily rely on acoustic information beyond words.
+
+ # 4.4 QUALITATIVE COMPARISON
+
+ Inspired by Yang et al. (2024a), we also attempted to use GPT-4 (OpenAI, 2024b) for evaluation, focusing on whether the responses exhibit the specific attribute characteristics and whether they provide reasonable replies to the previous context. As shown in Table 4, we present the qualitative testing results of different methods across 12 attributes. Specifically, a score of 3 represents mention of the attribute information, while 4 represents a reasonable and natural response.
+
+ We observed that the conclusions from the qualitative tests largely align with those from the quantitative evaluations. For context-driven attributes (such as speaker information and emotion), ASR-based dialogue models continue to demonstrate the best performance. However, for attributes that are highly dependent on acoustic information (such as speed, fidelity, audio, and music), direct spoken dialogue models like Qwen2-Audio significantly outperform FunAudioLLM, underscoring the importance of developing direct spoken dialogue models.
+
+ Additionally, we found that Qwen-Audio often responds with descriptive sentences related to the query, which severely affects its performance. The SALMONN model frequently repeats parts of the query, leading to higher quantitative scores in some attributes (e.g., a BERTScore of 87.53 for Stress, 0.45 higher than Qwen2-Audio), but its qualitative performance is inferior to Qwen2-Audio (3.86 vs. 3.93 on the GPT-based metric for Stress in Table 4). This indicates that most current large audio-language models are focused on QA-style interactions, and are not yet well-suited for dialogue-style conversations.
+
+ # 5 ETHICAL DISCUSSION
+
+ Our dataset incorporates certain attributes that may introduce bias (e.g., gender) as dimensions to evaluate the model's ability to process diverse acoustic information. However, this introduces potential risks of unfairness, such as biased or stereotypical responses. In Appendix C, we outline the fairness challenges faced by spoken dialogue models.
+
+ To promote the development of unbiased spoken dialogue systems, we conducted manual filtering of all potentially sensitive data to ensure that the dataset complies with the guidelines of Collins & Clément (2012) and excludes examples that could cause harm due to attribute-related biases. Looking ahead, we are committed to proactively identifying and addressing these challenges, contributing to the creation of fairer and more inclusive spoken dialogue systems. Furthermore, we pledge to continually update this work to advance the development of equitable conversational AI.
+
+ # 6 CONCLUSION
+
+ In this work, we introduced VoxDialogue, a comprehensive benchmark designed to evaluate spoken dialogue systems' ability to understand information beyond words. By identifying 12 critical attributes tied to acoustic cues such as speech rate, volume, emphasis, and background sounds, we constructed a challenging test set of 4.5K multi-turn dialogue samples. Our experiments demonstrated that while ASR-based systems excel at context understanding and textual interpretation, they fail to capture important acoustic signals that are essential for contextually appropriate responses. In contrast, direct spoken dialogue systems outperform ASR-based models in processing acoustic properties, but their limited ability to understand complex dialogue contexts remains a significant shortcoming. The findings highlight the importance of acoustic information in enhancing the performance of spoken dialogue systems and reveal the current limitations in both ASR-based and direct spoken dialogue models.
+
+ # REPRODUCIBILITY STATEMENT
+
+ All of our data, code, and model weights will be open-sourced.
+
+ - Section 3 provides detailed instructions on the construction of VoxDialogue, including a comprehensive list of all relevant open-source resources.
+ - Section 4.1 outlines the detailed task definitions.
+ - Section 4.2 elaborates on the evaluation metrics and specific details.
+ - All of our prompt templates are included in the Supplementary Material.
+
+ # ACKNOWLEDGMENTS
+
+ This work was supported in part by the National Natural Science Foundation of China under Grant No. 62222211 and No. 624B2128.
+
+ # REFERENCES
+
+ Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, and Zhizheng Wu. Sd-eval: A benchmark dataset for spoken dialogue understanding beyond words. arXiv preprint arXiv:2406.13340, 2024.
+ Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65-72, 2005.
+ Hervé Bredin. pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe. In Proc. INTERSPEECH 2023, 2023.
+ Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. Multiwoz-a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 5016-5026, 2018.
+ Xize Cheng, Rongjie Huang, Linjun Li, Tao Jin, Zehan Wang, Aoxiong Yin, Minglei Li, Xinyu Duan, Zhou Zhao, et al. Transface: Unit-based audio-visual speech synthesizer for talking head translation. arXiv preprint arXiv:2312.15197, 2023a.
+ Xize Cheng, Tao Jin, Rongjie Huang, Linjun Li, Wang Lin, Zehan Wang, Ye Wang, Huadai Liu, Aoxiong Yin, and Zhou Zhao. Mixspeech: Cross-modality self-learning with audio-visual stream mixup for visual speech translation and recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15735–15745, 2023b.
+ Xize Cheng, Tao Jin, Linjun Li, Wang Lin, Xinyu Duan, and Zhou Zhao. Opensr: Open-modality speech recognition via maintaining multi-modality alignment. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6592-6607, 2023c.
+ Xize Cheng, Dongjie Fu, Xiaoda Yang, Minghui Fang, Ruofan Hu, Jingyu Lu, Bai Jionghao, Zehan Wang, Shengpeng Ji, Rongjie Huang, et al. Omnichat: Enhancing spoken dialogue systems with scalable synthetic data for diverse scenarios. arXiv preprint arXiv:2501.01384, 2025.
+ Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, and Jingren Zhou. Qwen-audio: Advancing universal audio understanding via unified large-scale audio-language models. arXiv preprint arXiv:2311.07919, 2023.
+ Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, et al. Qwen2-audio technical report. arXiv preprint arXiv:2407.10759, 2024.
+ Katherine A Collins and Richard Clément. Language and prejudice: Direct and moderated effects. Journal of Language and Social Psychology, 31(4):376-396, 2012.
+ Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, et al. Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens. arXiv preprint arXiv:2407.05407, 2024.
+ Solène Evain, Ha Nguyen, Hang Le, Marcely Zanon Boito, Salima Mdhaffar, Sina Alisamir, Ziyi Tong, Natalia Tomashenko, Marco Dinarelli, Titouan Parcollet, et al. Lebenchmark: A reproducible framework for assessing self-supervised representation learning from speech. In INTERSPEECH 2021: Conference of the International Speech Communication Association, 2021.
+ Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, and Yang Feng. Llama-omni: Seamless speech interaction with large language models. arXiv preprint arXiv:2409.06666, 2024.
+ Dongjie Fu, Xize Cheng, Xiaoda Yang, Hanting Wang, Zhou Zhao, and Tao Jin. Boosting speech recognition robustness to modality-distortion with contrast-augmented prompts. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 3838-3847, 2024.
+ Khaled Hechmi, Trung Ngo Trong, Ville Hautamäki, and Tomi Kinnunen. Voxceleb enrichment for age and gender recognition. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 687-693. IEEE, 2021.
+ Matthew B Hoy. Alexa, siri, cortana, and more: an introduction to voice assistants. Medical reference services quarterly, 37(1):81-88, 2018.
+ Rongjie Huang, Huadai Liu, Xize Cheng, Yi Ren, Linjun Li, Zhenhui Ye, Jinzheng He, Lichao Zhang, Jinglin Liu, Xiang Yin, et al. Av-transpeech: Audio-visual robust speech-to-speech translation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8590–8604, 2023.
+ Shengpeng Ji, Yifu Chen, Minghui Fang, Jialong Zuo, Jingyu Lu, Hanting Wang, Ziyue Jiang, Long Zhou, Shujie Liu, Xize Cheng, et al. Wavchat: A survey of spoken dialogue models. arXiv preprint arXiv:2411.13577, 2024a.
+ Shengpeng Ji, Jialong Zuo, Wen Wang, Minghui Fang, Siqi Zheng, Qian Chen, Ziyue Jiang, Hai Huang, Zehan Wang, Xize Cheng, et al. Controlspeech: Towards simultaneous zero-shot speaker cloning and zero-shot language style control with decoupled codec. arXiv preprint arXiv:2406.01205, 2024b.
+ Zhifeng Kong, Arushi Goel, Rohan Badlani, Wei Ping, Rafael Valle, and Bryan Catanzaro. Audio flamingo: A novel audio language model with few-shot learning and dialogue abilities. arXiv preprint arXiv:2402.01831, 2024.
+ Keon Lee, Kyumin Park, and Daeyoung Kim. Dailytalk: Spoken dialogue dataset for conversational text-to-speech. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5. IEEE, 2023.
+ Songju Lei, Xize Cheng, Mengjiao Lyu, Jianqiao Hu, Jintao Tan, Runlin Liu, Lingyu Xiong, Tao Jin, Xiandong Li, and Zhou Zhao. Uni-dubbing: Zero-shot speech synthesis from visual articulation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10082-10099, 2024.
+ Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 986-995, 2017.
+ Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74-81, 2004.
+ Guan-Ting Lin, Cheng-Han Chiang, and Hung-yi Lee. Advancing large language models to capture varied speaking styles and respond properly in spoken conversations. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6626-6642, Bangkok, Thailand, August 2024a. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.358.
+ Guan-Ting Lin, Cheng-Han Chiang, and Hung-yi Lee. Advancing large language models to capture varied speaking styles and respond properly in spoken conversations. arXiv preprint arXiv:2402.12786, 2024b.
+ Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. Does gender matter? towards fairness in dialogue systems. arXiv preprint arXiv:1910.10486, 2019.
+ Xubo Liu, Egor Lakomkin, Konstantinos Vougioukas, Pingchuan Ma, Honglie Chen, Ruiming Xie, Morrie Doulaty, Niko Moritz, Jachym Kolar, Stavros Petridis, et al. Synthvsr: Scaling up visual speech recognition with synthetic supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18806-18815, 2023.
+ Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, and Xie Chen. emotion2vec: Self-supervised pre-training for speech emotion representation. arXiv preprint arXiv:2312.15185, 2023.
+ OpenAI. Gpt-4o system card. https://cdn.openai.com/gpt-4o-system-card.pdf, 2024a.
+ OpenAI. Chatgpt can now see, hear, and speak. https://openai.com/index/chatgpt-can-now-see-hear-and-speak/, 2024b.
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311-318, 2002.
+ Alexis Plaquet and Hervé Bredin. Powerset multi-class cross entropy loss for neural speaker diarization. In Proc. INTERSPEECH 2023, 2023.
+ Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International conference on machine learning, pp. 28492-28518. PMLR, 2023.
+ Suwon Shon, Ankita Pasad, Felix Wu, Pablo Brusco, Yoav Artzi, Karen Livescu, and Kyu J Han. Slue: New benchmark tasks for spoken language understanding evaluation on natural speech. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7927-7931. IEEE, 2022.
+ Shuzheng Si, Wentao Ma, Haoyu Gao, Yuchuan Wu, Ting-En Lin, Yinpei Dai, Hangyu Li, Rui Yan, Fei Huang, and Yongbin Li. Spokenwoz: A large-scale speech-text benchmark for spoken task-oriented dialogue agents. Advances in Neural Information Processing Systems, 36, 2024.
+ Tongyi SpeechTeam. Funaudiollm: Voice understanding and generation foundation models for natural interaction between humans and llms. arXiv preprint arXiv:2407.04051, 2024.
+ Hsuan Su, Rebecca Qian, Chinnadhurai Sankar, Shahin Shayandeh, Shang-Tse Chen, Hung-yi Lee, and Daniel M Bikel. Step by step to fairness: Attributing societal bias in task-oriented dialogue systems. arXiv preprint arXiv:2311.06513, 2023.
+ Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, and Chao Zhang. Salmonn: Towards generic hearing abilities for large language models. arXiv preprint arXiv:2310.13289, 2023.
+ Naohiro Tawara, Atsunori Ogawa, Yuki Kitagishi, and Hosana Kamiyama. Age-vox-celeb: Multi-modal corpus for facial and speech estimation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6963-6967. IEEE, 2021.
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
+ Zhifei Xie and Changqiao Wu. Mini-omni: Language models can hear, talk while thinking in streaming. arXiv preprint arXiv:2408.16725, 2024.
+ Qian Yang, Jin Xu, Wenrui Liu, Yunfei Chu, Ziyue Jiang, Xiaohuan Zhou, Yichong Leng, Yuanjun Lv, Zhou Zhao, Chang Zhou, and Jingren Zhou. AIR-bench: Benchmarking large audio-language models via generative comprehension. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1979–1998, Bangkok, Thailand, August 2024a. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.109.
+ Shu Wen Yang, Po Han Chi, Yung Sung Chuang, Cheng I Jeff Lai, Kushal Lakhotia, Yist Y Lin, Andy T Liu, Jiatong Shi, Xuankai Chang, Guan Ting Lin, et al. Superb: Speech processing universal performance benchmark. In 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021, pp. 3161-3165. International Speech Communication Association, 2021.
+ Xiaoda Yang, Xize Cheng, Dongjie Fu, Minghui Fang, Jialong Zuo, Shengpeng Ji, Zhou Zhao, and Tao Jin. Synctalklip: Highly synchronized lip-readable speaker generation with multi-task learning. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 8149-8158, 2024b.
+ Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. arXiv preprint arXiv:2305.11000, 2023.
+ Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675, 2019.
+
+ Table 5: Detailed Comparison of Spoken Dialogue Systems across Various Metrics
+
+ (a) BLEU Scores
+
+ <table><tr><td rowspan="2">Method</td><td colspan="4">Speaker Info</td><td colspan="6">Paralinguistic Info</td><td colspan="2">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>2.53</td><td>2.66</td><td>3.34</td><td>2.72</td><td>3.22</td><td>4.20</td><td>2.77</td><td>2.65</td><td>3.58</td><td>2.37</td><td>3.34</td><td>3.24</td></tr><tr><td>Audio-Flamingo</td><td>2.08</td><td>2.40</td><td>2.83</td><td>0.01</td><td>2.74</td><td>3.95</td><td>2.70</td><td>2.50</td><td>2.58</td><td>1.41</td><td>3.38</td><td>2.81</td></tr><tr><td>Qwen-Audio</td><td>2.26</td><td>2.56</td><td>3.05</td><td>1.74</td><td>3.01</td><td>3.78</td><td>2.61</td><td>0.54</td><td>3.02</td><td>2.85</td><td>3.60</td><td>2.87</td></tr><tr><td>SALMONN</td><td>2.29</td><td>2.35</td><td>2.88</td><td>3.09</td><td>2.88</td><td>4.44</td><td>2.73</td><td>2.82</td><td>2.33</td><td>2.04</td><td>3.55</td><td>2.86</td></tr><tr><td>Qwen2-Audio</td><td>2.22</td><td>2.52</td><td>3.20</td><td>3.18</td><td>3.11</td><td>4.56</td><td>2.92</td><td>3.38</td><td>2.93</td><td>2.10</td><td>2.97</td><td>2.97</td></tr></table>
+
+ (b) ROUGE-L Scores
+
+ <table><tr><td rowspan="2">Method</td><td colspan="4">Speaker Info</td><td colspan="6">Paralinguistic Info</td><td colspan="2">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>12.15</td><td>12.95</td><td>15.07</td><td>15.88</td><td>14.93</td><td>8.28</td><td>7.97</td><td>4.47</td><td>13.49</td><td>10.67</td><td>12.01</td><td>11.97</td></tr><tr><td>Audio-Flamingo</td><td>6.12</td><td>6.15</td><td>6.62</td><td>0.03</td><td>5.78</td><td>5.48</td><td>7.67</td><td>7.57</td><td>5.12</td><td>7.41</td><td>5.91</td><td>7.88</td></tr><tr><td>Qwen-Audio</td><td>8.34</td><td>9.62</td><td>7.12</td><td>12.09</td><td>8.24</td><td>0.71</td><td>6.61</td><td>14.36</td><td>7.76</td><td>12.36</td><td>7.29</td><td>9.01</td></tr><tr><td>SALMONN</td><td>11.52</td><td>11.43</td><td>10.51</td><td>14.80</td><td>11.81</td><td>13.30</td><td>10.56</td><td>10.22</td><td>15.71</td><td>11.01</td><td>10.05</td><td>10.51</td></tr><tr><td>Qwen2-Audio</td><td>11.51</td><td>11.44</td><td>13.18</td><td>15.66</td><td>14.18</td><td>23.13</td><td>17.34</td><td>9.58</td><td>13.45</td><td>11.36</td><td>12.23</td><td>12.18</td></tr></table>
+
+ (c) METEOR Scores
+
+ <table><tr><td rowspan="2">Method</td><td colspan="4">Speaker Info</td><td colspan="6">Paralinguistic Info</td><td colspan="2">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>16.89</td><td>20.12</td><td>21.03</td><td>15.21</td><td>19.31</td><td>10.19</td><td>9.83</td><td>8.16</td><td>16.95</td><td>10.31</td><td>12.91</td><td>12.42</td></tr><tr><td>Audio-Flamingo</td><td>8.23</td><td>7.79</td><td>10.03</td><td>0.25</td><td>9.17</td><td>8.31</td><td>8.69</td><td>11.04</td><td>8.12</td><td>7.88</td><td>9.93</td><td>11.01</td></tr><tr><td>Qwen-Audio</td><td>12.87</td><td>14.16</td><td>12.92</td><td>11.06</td><td>13.12</td><td>1.41</td><td>5.28</td><td>6.11</td><td>10.92</td><td>21.41</td><td>11.92</td><td>11.68</td></tr><tr><td>SALMONN</td><td>11.02</td><td>10.81</td><td>11.21</td><td>10.35</td><td>11.13</td><td>11.78</td><td>10.14</td><td>10.17</td><td>11.84</td><td>9.03</td><td>11.18</td><td>11.08</td></tr><tr><td>Qwen2-Audio</td><td>12.96</td><td>16.15</td><td>18.24</td><td>14.37</td><td>17.05</td><td>22.11</td><td>19.08</td><td>12.78</td><td>14.01</td><td>13.11</td><td>13.21</td><td>13.39</td></tr></table>
+
+ (d) BERTScore
+
+ <table><tr><td rowspan="2">Method</td><td colspan="4">Speaker Info</td><td colspan="6">Paralinguistic Info</td><td colspan="2">Background</td></tr><tr><td>Age</td><td>Gen</td><td>Acc</td><td>Lan</td><td>Emo</td><td>Vol</td><td>Spd</td><td>Fid</td><td>Str</td><td>NVE</td><td>Aud</td><td>Mus</td></tr><tr><td>FunAudioLLM</td><td>86.14</td><td>86.65</td><td>87.24</td><td>86.97</td><td>86.90</td><td>84.87</td><td>85.03</td><td>84.36</td><td>87.51</td><td>83.79</td><td>86.02</td><td>86.19</td></tr><tr><td>Audio-Flamingo</td><td>83.10</td><td>83.84</td><td>83.86</td><td>75.28</td><td>83.78</td><td>84.91</td><td>84.71</td><td>84.81</td><td>83.78</td><td>82.89</td><td>83.78</td><td>84.74</td></tr><tr><td>Qwen-Audio</td><td>83.40</td><td>84.46</td><td>83.84</td><td>85.79</td><td>84.34</td><td>85.53</td><td>85.34</td><td>87.12</td><td>79.55</td><td>83.85</td><td>83.95</td><td>84.14</td></tr><tr><td>SALMONN</td><td>84.60</td><td>86.75</td><td>86.65</td><td>86.44</td><td>86.05</td><td>87.27</td><td>86.06</td><td>86.74</td><td>87.53</td><td>84.92</td><td>85.63</td><td>86.06</td></tr><tr><td>Qwen2-Audio</td><td>85.59</td><td>85.80</td><td>86.70</td><td>86.50</td><td>86.65</td><td>88.00</td><td>87.30</td><td>85.66</td><td>87.08</td><td>84.51</td><td>87.22</td><td>87.19</td></tr></table>
256
+
257
+ # A MORE EXPERIMENTAL RESULTS
258
+
259
+ # A.1 THE DETAILED PERFORMANCE COMPARISON
260
+
261
+ For comparison, the detailed performance corresponding to Figure 2 is presented in Table 5.
262
+
263
+ # B LIMITATION
264
+
265
+ Our work heavily relies on synthetic datasets. Although prior research (Liu et al., 2023) has shown that synthetic data can be effectively used for training and evaluation, a domain gap persists between synthetic and real-world data. This gap may affect the generalization of models trained on synthetic data when applied to real-world dialogue scenarios.
268
+
269
+ However, since our focus is on understanding acoustic information, synthetic data proves particularly useful in simulating various acoustic cues found in real conversational settings. Additionally, the synthetic dataset offers more diverse and controllable dialogue content, making it sufficient for evaluating whether spoken dialogue systems can understand information beyond text.
270
+
271
+ To properly assess the performance of dialogue systems in real-world scenarios, it is crucial to use datasets based on authentic conversational environments. We believe that constructing a separate real-world dialogue evaluation benchmark, independent of our work, would be more effective in evaluating spoken dialogue systems' performance in real scenarios than using a single dataset to assess both acoustic information comprehension and real-world dialogue capabilities.
272
+
273
+ # C ETHICAL DISCUSSIONS ON SPOKEN DIALOGUE SYSTEMS
274
+
275
+ # C.1 FAIRNESS CHALLENGES IN SPOKEN CONVERSATION
276
+
277
+ Ensuring fairness in spoken dialogue systems involves several challenges, particularly when addressing attributes like gender:
278
+
279
+ I. Difficulty in Identifying Gender Bias: Beyond explicit expressions (e.g., "sir" or "madam"), implicit biases (e.g., career or family-related topics) (Liu et al., 2019) are deeply embedded in existing large language models, making it difficult to guarantee unbiased responses.
280
+
281
+ II. Fairness and Attribute Understanding: While understanding gender attributes can enhance personalization and conversational relevance, over-reliance may reinforce stereotypes. Conversely, completely eliminating gender considerations could limit the model's ability to provide contextually appropriate responses in scenarios where gender information is explicitly relevant. Therefore, an appropriate balance between fairness and attribute understanding should be achieved, ensuring that biases do not cause harm while fostering diversity in responses and improving attribute-specific relevance.
282
+
283
+ III. Difficulty in Evaluating Bias: Current fairness metrics (Su et al., 2023) often fail to capture nuanced and context-dependent biases in spoken dialogue systems, especially in open-ended and multi-turn conversations.
284
+
285
+ # C.2 MITIGATION STRATEGIES
286
+
287
+ To address these challenges, we commit to implementing the following measures:
288
+
289
+ I. Manual Filtering: We conducted manual filtering of all potentially sensitive data to ensure that the dataset complies with the guidelines of Collins & Clément (2012) and excludes examples that could cause harm due to attribute-related biases.
290
+ II. Bias Warnings: Clear disclaimers will be included in the documentation to highlight potential gender biases and encourage developers to consider fairness during model development.
291
+ III. Continuous Dataset Updates: We will continuously update and refine the dataset to address fairness issues. Any subsets or evaluation components found to introduce risks of bias will be removed or adjusted as necessary.
2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0ff61c0950547fe26f4c6197976ed0fb4b90e061c3f1134f027871c9538db441
3
+ size 858993
2025/VoxDialogue_ Can Spoken Dialogue Systems Understand Information Beyond Words_/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/651ec223-6a7e-4d1c-bdf9-1d5f8132dcb7_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/651ec223-6a7e-4d1c-bdf9-1d5f8132dcb7_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/651ec223-6a7e-4d1c-bdf9-1d5f8132dcb7_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:37e9b57ef18ec90d35f110b09b73d1924876c064b39bc955693ce2b1716e22d3
3
+ size 901664
2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/full.md ADDED
@@ -0,0 +1,577 @@
1
+ # W-PCA BASED GRADIENT-FREE PROXY FOR EFFICIENT SEARCH OF LIGHTWEIGHT LANGUAGE MODELS
2
+
3
+ Shang Wang
4
+
5
+ ShanghaiTech University
6
+
7
+ wangshang2024@shanghaitech.edu.cn
8
+
9
+ # ABSTRACT
10
+
11
+ The demand for efficient natural language processing (NLP) systems has led to the development of lightweight language models. Previous work in this area has primarily focused on manual design or training-based neural architecture search (NAS) methods. Recently, zero-shot NAS methods have been proposed for evaluating language models without the need for training. However, prevailing approaches to zero-shot NAS often face challenges such as biased evaluation metrics and computational inefficiencies. In this paper, we introduce weight-weighted PCA (W-PCA), a novel zero-shot NAS method specifically tailored for lightweight language models. Our approach utilizes two evaluation proxies: the parameter count and the number of principal components with cumulative contribution exceeding $\eta$ in the feed-forward network (FFN) layer. Additionally, by eliminating the need for gradient computations, we optimize the evaluation time, thus enhancing the efficiency of designing and evaluating lightweight language models. We conduct a comparative analysis on the GLUE and SQuAD datasets to evaluate our approach. The results demonstrate that our method significantly reduces training time compared to one-shot NAS methods and achieves higher scores in the testing phase compared to previous state-of-the-art training-based methods. Furthermore, we perform ranking evaluations on a dataset sampled from the FlexiBERT search space. Our approach exhibits superior ranking correlation and further reduces solving time compared to other zero-shot NAS methods that require gradient computation.
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ Large language models (LLMs) have shown exceptional performance across various domains (OpenAI, 2023). However, their size and computational demands pose challenges in resource-constrained environments like mobile devices and edge computing. Therefore, there is a growing need to explore lightweight language models that can operate efficiently on these platforms. One approach to address this challenge is through knowledge distillation (KD) (Liu et al., 2023; Li et al., 2023c; Li & Jin, 2022; Li et al., 2023b; Li, 2022; Li et al., 2024a), where a larger language model acts as a teacher to train a smaller, more lightweight language model (Turc et al., 2019; Sanh et al., 2020; Jiao et al., 2020; Sun et al., 2020; Wang et al., 2020). However, the student models trained for these tasks were manually designed. To effectively search for student models, the use of neural architecture search (NAS) has become essential.
16
+
17
+ NAS is a technique that automates the process of designing neural networks, enabling the exploration of a wide range of architectures to identify the optimal ones for a given task. Vanilla NAS approaches primarily used reinforcement learning (Zoph & Le, 2016) or genetic algorithms (Real et al., 2019) to search for architectures,
18
+
19
+ ![](images/4dbc48e8ded114bcc41aac4bc8bcce30d5523a162b3ff25d57bd0f691dfc67aa.jpg)
20
+ Figure 1: Comparison of the running time between W-PCA and other training-based NAS methods for lightweight language models. Our method achieves a substantial reduction in search time for the optimal network structure by two to three orders of magnitude, as we do not need to train the supernet.
21
+
22
+ ![](images/a0a13d039cdb550dfb39024b133e64c5a9992b09df17f7ba7a5eb4637eb82f14.jpg)
23
+
24
+ ![](images/0c67c36dd4836ba653239446ee060011c8f0d63066683f8b6eb3b94017cfd2ef.jpg)
25
+
26
+ ![](images/de1f974e3e0308f3218630f8f1e6e240f52417d136516f92e99032d8364f2975.jpg)
27
+
28
+ ![](images/b9b7477963bc19c3f41a43da526b44633b39f1d072e11a7c353b04e22f7f1803.jpg)
29
+
30
+ ![](images/d4fe8803060bb0e6cc9ee819810674d6e05534a70f15b5cc434f075331cf2c05.jpg)
31
+ Figure 2: Plots depicting the evaluation of zero-shot proxy metrics on 500 randomly sampled architectures from the FlexiBERT search space. As in the literature (Serianni & Kalita, 2023), we use the GLUE score of each neural network as the ground truth and evaluate the performance of each zero-shot proxy metric based on its ranking correlation with the ground truth. The specific calculation of PCA is described in Section 3.2, and the respective zero-shot proxies used for the comparison are summarized in Section 2.3. Our metric W-PCA is calculated as the product of the number of parameters (#params) and the PCA value.
32
+
33
+ ![](images/2fdf9c21fcc3750f8b9ed83b03bfeb4c792896a08052597986d3835db7bf2b49.jpg)
34
+
35
+ ![](images/d360b3f942508fe90e1db85bf62b385712a3e8e049e898921232214c7a4bcb90.jpg)
36
+
37
+ ![](images/2d20886e6951b3b9c8fc414b83c361dc1f8eb8e6fadcf9f2d4452a6c3070d334.jpg)
38
+
39
+ but these methods were computationally expensive. Subsequently, one-shot NAS methods, such as gradient-based (Liu et al., 2018) and single path one-shot (SPOS) methods (Guo et al., 2020), were proposed. These methods are more efficient as they leverage pre-trained models or parameter sharing, requiring the establishment of a supernet in advance from which to sample the optimal subnetworks. Importantly, many lightweight model search tasks in natural language understanding (NLU) are accomplished using one-shot NAS (Xu et al., 2021; Dong et al., 2021; Gao et al., 2022). While one-shot NAS reduces training costs compared to training from scratch, it still requires various training strategies to effectively train the supernet. However, to further enhance search efficiency, it is necessary to introduce zero-shot NAS (Mellor et al., 2021a). Zero-shot NAS, also known as training-free NAS, is a promising approach that eliminates the need for training neural networks and directly evaluates their performance using proxy metrics. This significantly reduces the training time and computational resources required for NAS.
40
+
41
+ Existing zero-shot NAS methods (Abdelfattah et al., 2020; Wei et al., 2024; Dong et al., 2023a;b) have primarily focused on ranking correlations on NAS benchmark datasets (Klyuchnikov et al., 2022), with limited consideration for specific deep learning tasks. This limitation hinders their applicability and effectiveness in practical scenarios. Additionally, these methods often solely consider a single feature of the models, leading to biased evaluations and potentially overlooking important characteristics.
42
+
43
+ In our research, we aim to address these limitations and improve the applicability of zero-shot NAS. In Section 5, we conducted ranking correlation experiments. As illustrated in Figure 2, our attempts to incorporate previous zero-shot proxies into language model evaluation yielded unsatisfactory results. However, we observed strong ranking correlations with the ground truth for both principal component analysis (PCA) and the number of parameters (#params), with their product demonstrating even better performance.
44
+
45
+ Motivated by these findings, we propose a novel approach called Weight-Weighted PCA (W-PCA), which takes into account both the parameter count and the number of principal components whose cumulative contribution exceeds a given threshold $\eta$ in the model. By integrating these two factors, our aim is to achieve a more accurate evaluation of language models in the context of zero-shot NAS. Furthermore, we have designed a search space specifically for NLU tasks and applied our zero-shot proxy, as well as previous zero-shot proxies used for Transformer language models, to this search space. To the best of our knowledge, this is the first work that applies zero-shot NAS to NLU tasks.
46
+
47
+ # 2 RELATED WORK
48
+
49
+ # 2.1 LIGHTWEIGHT BERT MODELS
50
+
51
+ Turc et al. (2019) observed that distillation and pre-training plus fine-tuning have mutually reinforcing effects. DistilBERT (Sanh et al., 2020) utilizes a triple loss function for training the lightweight model. TinyBERT (Jiao et al., 2020) applies distillation in both the pre-training and task-specific learning phases. MobileBERT (Sun et al., 2020) proposes a bottleneck structure to reduce the parameter count. MiniLM (Wang et al., 2020) introduces a compression method called deep self-attentive distillation. In this study, we incorporate both the standard BERT-base (Devlin et al., 2019) and MobileBERT models, along with their weight-sharing variations, where each layer is integrated into the supernet.
52
+
53
+ # 2.2 ONE-SHOT NAS FOR EFFICIENT MODELS
54
+
55
+ Numerous methods have been proposed for performing neural architecture search (NAS) (Hu et al., 2021; Dong et al., 2022; Sun et al., 2024; Li et al., 2024d;c) to develop efficient models. NAS-BERT (Xu et al., 2021) trains a large supernet on a carefully designed search space that includes diverse architectures, generating multiple compressed models with adaptable sizes and latency. EfficientBERT (Dong et al., 2021) proposes a three-stage coarse-to-fine search scheme to optimize the combination of the multilayer perceptron (MLP) in the feed-forward network (FFN), ultimately reducing the parameter count of the FFN. AutoBERT-Zero (Gao et al., 2022) devises a search space that includes unary and binary math operators for constructing attention structures and backbones for general pre-trained language models (PLMs) from scratch. To the best of our knowledge, it is the most recent NAS method that incorporates lightweight BERT models in the experiment.
56
+
57
+ # 2.3 ZERO-SHOT NAS
58
+
59
+ Zero-shot NAS has been applied to transformer-based architectures in several ways. We provide a summary of these applications below.
60
+
61
+ Synaptic Saliency (Tanaka et al., 2020) aims to prevent layer collapse during network pruning, as this collapse can significantly reduce the accuracy of the network. The formulation for this approach is expressed as follows:
62
+
63
+ $$
64
+ S (\theta) = \frac {\partial \mathcal {L}}{\partial \theta} \odot \theta
65
+ $$
66
+
67
+ where $\mathcal{L}$ represents the loss function, $\theta$ denotes the network's parameters, and $\odot$ is the Hadamard product. Abdelfattah et al. generalize synaptic saliency as a zero-shot metric for NAS by summing over all $n$ parameters in the network: $S = \sum_{i=1}^{n} S(\theta_i)$
68
+
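+ To make this concrete, a minimal PyTorch sketch of the summed synaptic-saliency proxy follows; it assumes a `model` and a scalar `loss` already computed on one minibatch, and the function name is ours rather than from the original papers.
+
+ ```python
+ import torch
+
+ def synaptic_saliency(model, loss):
+     """Sum of the elementwise products (dL/dtheta) * theta over all
+     trainable parameters, i.e., S = sum_i S(theta_i)."""
+     params = [p for p in model.parameters() if p.requires_grad]
+     grads = torch.autograd.grad(loss, params, allow_unused=True)
+     score = 0.0
+     for p, g in zip(params, grads):
+         if g is not None:
+             score += (g * p).sum().item()
+     return score
+ ```
+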
69
+ Synaptic Diversity builds upon previous research on rank collapse in transformers. In this phenomenon, the output of a multihead attention block tends to converge to rank 1 for a given set of inputs, which significantly impairs the performance of the transformer. Zhou et al. propose a method that utilizes the nuclear norm of an attention head's weight matrix $W_{m}$ as an approximation of its rank. This approach leads to the computation of the synaptic diversity score as follows:
70
+
71
+ $$
72
+ S_{D} = \sum_{m} \left\| \frac{\partial \mathcal{L}}{\partial W_{m}} \right\|_{nuc} \odot \left\| W_{m} \right\|_{nuc}
73
+ $$
74
+
75
+ Activation Distance is a proxy metric introduced by Mellor et al. to assess the ReLU activations of a network. By computing the Hamming distance between the activations within the initialized network for each input in a minibatch, this metric determines the similarity of the activation maps. The authors observe that when the activation maps for a given set of inputs exhibit higher similarity, the network faces greater difficulty in disentangling the input representations during the training process.
76
+
77
+ Jacobian Covariance evaluates the Jacobian $J = \left(\frac{\partial L}{\partial\mathbf{x}_1},\dots ,\frac{\partial L}{\partial\mathbf{x}_N}\right)$ of the network's loss function with respect to the minibatch inputs. Further details of this metric can be found in the original paper (Mellor et al., 2021b).
78
+
79
+ Jacobian Cosine (Celotti et al., 2020) is proposed as an improvement to the Jacobian Covariance metric, aiming to enhance computation speed and effectiveness. This improvement involves utilizing cosine similarity instead of a covariance matrix to measure similarity. The metric is computed as follows:
82
+
83
+ $$
84
+ S = 1 - \frac {1}{N ^ {2} - N} \sum_ {i = 1} ^ {N} | J _ {n} J _ {n} ^ {T} - I | ^ {\frac {1}{2 0}}
85
+ $$
86
+
87
+ Here, $J_{n}$ represents the normalized Jacobian, and $I$ is the identity matrix. The metric is computed using a minibatch of $N$ inputs. In their "large noise" and "more noised" variants of the score, the authors introduce various noise levels into the input minibatch, hypothesizing that architectures exhibiting high accuracy will demonstrate robustness against noise.
88
+
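+ As a reference point, the sketch below is one way to compute this metric in PyTorch; it treats the sum as running over the entries of the $N \times N$ similarity matrix and uses the summed model output as a stand-in for the loss, both of which are our assumptions rather than details fixed by the original paper.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def jacobian_cosine(model, x):
+     """Jacobian-cosine proxy evaluated on a minibatch x of N inputs."""
+     x = x.clone().requires_grad_(True)
+     out = model(x).sum()                    # scalar stand-in for the loss
+     J = torch.autograd.grad(out, x)[0]      # per-input Jacobian rows
+     Jn = F.normalize(J.flatten(1), dim=1)   # normalized Jacobian J_n
+     N = Jn.shape[0]
+     M = (Jn @ Jn.T - torch.eye(N)).abs() ** (1 / 20)
+     return 1.0 - M.sum().item() / (N ** 2 - N)
+ ```
+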
89
+ Attention Confidence, Importance, and Softmax Confidence "Confident" attention heads exhibit high attention towards a single token, indicating their potential importance to the transformer's task. Researchers have proposed different approaches to calculating confidence, including examining the softmax layer of the attention head and analyzing the sensitivity of the attention head to weight masking by computing the product between the attention head's output and the gradient of its weights. Serianni & Kalita summarize the findings from (Voita et al., 2019; Behnke & Heafield, 2020; Michel et al., 2019) regarding the following metrics:
90
+
91
+ Confidence: $A_{h}(\mathbf{X}) = \frac{1}{N}\sum_{n = 1}^{N}|\max (Att_{\mathrm{h}}(\mathbf{x}_n))|$
92
+
93
+ Softmax Confidence: $A_{h}(\mathbf{X}) = \frac{1}{N}\sum_{n = 1}^{N}|\max (\sigma_{\mathrm{h}}(\mathbf{x}_n))|$
94
+
95
+ Importance: $A_{h}(\mathbf{X}) = |Att_{\mathrm{h}}(\mathbf{X})\frac{\partial\mathcal{L}(\mathbf{X})}{\partial Att_{\mathrm{h}}(\mathbf{X})}|$
96
+
97
+ where $X = \{x_{n}\}_{n = 1}^{N}$ represents a minibatch of $N$ inputs, $\mathcal{L}$ denotes the loss function of the model, and $Att_{h}$ and $\sigma_{\mathrm{h}}$ denote an attention head and its softmax, respectively. To obtain an overall metric for the entire network, Serianni & Kalita extend these scores by averaging them across all $H$ attention heads: $\mathcal{A}(\mathbf{X}) = \frac{1}{H}\sum_{h = 1}^{H} A_{h}(\mathbf{X})$
98
+
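+ As one concrete instance, the softmax-confidence variant can be sketched as follows, assuming the per-head attention maps are exposed (e.g., via `output_attentions=True` in a Transformers-style model); the plain confidence score is analogous but takes the maximum over the attention head's outputs $Att_{h}$ instead. The tensor layout is an assumption.
+
+ ```python
+ import torch
+
+ def softmax_confidence(attn):
+     """Mean over the batch of each head's largest softmax attention
+     weight, averaged across all H heads; attn has shape (B, H, N, N)."""
+     per_head = attn.abs().amax(dim=(-2, -1)).mean(dim=0)  # (H,)
+     return per_head.mean().item()
+ ```
+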
99
+ # 3 OUR GRADIENT-FREE WEIGHT-WEIGHTED PCA PROXY
100
+
101
+ # 3.1 MOTIVATION FOR USING PCA
102
+
103
+ ![](images/3ad8f8f44c7e8117a5af1273929eec7c96d29f9e8a162f7f6b5b2075af597bcf.jpg)
104
+ (a)
105
+
106
+ ![](images/a65e8f76e429e8635bcf6b98d6996194f31ddaeb59c6f3806da3ba04a7c17857.jpg)
107
+ (b)
108
+ Figure 3: (a) and (b) show the PCA score curves for BERT (Devlin et al., 2019) and MobileBERT (Sun et al., 2020), respectively, at different epochs during training $(\eta = 0.99)$ . (c) presents the progression of GLUE scores for BERT and MobileBERT over training epochs.
109
+
110
+ ![](images/185ce1ce3fae31e516ed541b25dc40a3dd3528827b74697b56fa4338da573d9e.jpg)
111
+ (c)
112
+
113
+ Inspired by the work in Pan et al. (2023), which leverages PCA to optimize the training of Vision Transformers, we analyzed the trends in PCA variation during the training of BERT and MobileBERT. From Figure 3, we can draw the following conclusions:
114
+
115
+ 1. Tracking performance via PCA. As shown in Figure 3, the overall GLUE score and the PCA values for each layer progressively increase with the number of training epochs. This indicates that PCA values effectively reflect the performance of the neural network. Specifically, the steady rise in PCA values suggests that the network's internal representations become more structured and discriminative.
116
+ 2. Diminishing returns after peak performance. Our observations also revealed that after reaching peak values, the PCA curves flatten out, indicating that further training yields diminishing returns.
117
+
118
+ This aligns with the conclusions in Table 13, where we discussed that prolonged training results in minimal performance gains.
119
+
120
+ 3. Early PCA values as predictors. Notably, layers with higher PCA values at epoch 0 (i.e., before training begins) tend to maintain higher PCA values throughout training. This led us to hypothesize that the initial PCA values could serve as effective indicators for comparing neural network architectures during the NAS phase.
121
+
122
+ To validate these hypotheses, we conducted experiments as shown in Figure 2, which confirmed that early PCA values correlate well with the final performance ranking of the networks. This insight motivated us to further pursue accuracy comparison experiments, yielding promising results.
123
+
124
+ # 3.2 VANILLA PCA PROXY
125
+
126
+ For a given $\eta$ , the distribution of PCA principal component values reflects the proportion of useful information in the matrix. We attempt to use this distribution as a metric for evaluating neural network performance. Our metric is computed as follows:
127
+
128
+ $$
129
+ S_{f}(\mathbf{X}) = \operatorname{PCA\_dim}(\mathbf{X}, \eta) \tag{1}
130
+ $$
131
+
132
+ Here, $\mathbf{X} \in \mathbb{R}^{B \times N \times D}$ represents a minibatch of inputs, where $B$ represents the batch size, $N$ is the token length, and $D$ represents the embedding dimension. $\eta$ represents the cumulative contribution rate of principal components. We calculate the PCA values for the hidden states after the initial linear transformation in the FFN layer. Specifically, if we express the FFN layer as:
133
+
134
+ $$
135
+ \operatorname{FFN}(\mathbf{X}) = \sigma\left(\mathbf{X}\mathbf{W}_{1} + b_{1}\right)\mathbf{W}_{2} + b_{2} \tag{2}
136
+ $$
137
+
138
+ Then, we compute the PCA value of the $\mathbf{X}\mathbf{W}_{\mathbf{1}} + b_{1}$ part, denoted as $\mathbf{H} \in \mathbb{R}^{B \times N \times D'}$ in the following text, where $D'$ represents the hidden dimension. To compute the PCA values, we first reshape $H$ into a two-dimensional matrix $\mathbf{H}' \in \mathbb{R}^{(BN) \times D'}$ . Then, we subtract the mean of each feature (column) from the data to center it:
139
+
140
+ $$
141
+ \mathbf{H}_{\text{centered}} = \mathbf{H}^{\prime} - \mathbf{H}^{\prime}.\operatorname{mean}(0) \tag{3}
142
+ $$
143
+
144
+ We then calculate the covariance matrix of the centered data as:
145
+
146
+ $$
147
+ \mathbf{C} = \frac{1}{(BN) - 1} \mathbf{H}_{\text{centered}}^{T} \mathbf{H}_{\text{centered}} \tag{4}
148
+ $$
149
+
150
+ Next, we perform an eigenvalue decomposition on the covariance matrix $\mathbf{C} \in \mathbb{R}^{D' \times D'}$ to obtain eigenvalues $(\Lambda)$ and eigenvectors $(\mathbf{V})$ :
151
+
152
+ $$
153
+ \mathbf {C} = \mathbf {V} \boldsymbol {\Lambda} \mathbf {V} ^ {T} \tag {5}
154
+ $$
155
+
156
+ Here, $\Lambda$ is a diagonal matrix of eigenvalues and $\mathbf{V}$ is the matrix of eigenvectors. After sorting each eigenvalue $\lambda_{i}$ in descending order, we determine the minimum number of eigenvectors $k$ required to explain at least $\eta$ variance:
157
+
158
+ $$
159
+ k = \min \left\{k ^ {\prime} \mid \frac {\sum_ {i = 1} ^ {k ^ {\prime}} \lambda_ {i}}{\sum_ {i = 1} ^ {D ^ {\prime}} \lambda_ {i}} \geq \eta \right\} \tag {6}
160
+ $$
161
+
162
+ This value of $k$ represents the required PCA_dim(X, η). By analyzing PCA_dim (the dimensions with PCA values exceeding a threshold $\eta$ ), we can identify the dimensions that contain a higher amount of valuable information.
163
+
164
+ The metric for an $m$ -layer neural network model is obtained by summing $S_{f}(\mathbf{X})$ over all layers, resulting in:
165
+
166
+ $$
167
+ S (\mathbf {X}) = \sum_ {f = 1} ^ {m} S _ {f} (\mathbf {X}) \tag {7}
168
+ $$
169
+
170
+ Here, the metric $S_{f}(\mathbf{X})$ represents the PCA-based value for a specific layer $f$ .
171
+
172
+ This methodology enables us to effectively assess the performance of candidate architectures based on their PCA values and to identify the dimensions that contribute significantly to the valuable information in the hidden states.
173
+
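+ Equations (1)–(6) translate directly into a few lines of code. The PyTorch sketch below is illustrative: it assumes the hidden states $\mathbf{H}$ have already been captured (e.g., with a forward hook on the first linear layer of the FFN), and the function name is ours.
+
+ ```python
+ import torch
+
+ def pca_dim(H, eta=0.99):
+     """PCA_dim(X, eta) for FFN hidden states H of shape (B, N, D')."""
+     B, N, Dp = H.shape
+     Hf = H.reshape(B * N, Dp)               # flatten batch and tokens
+     Hc = Hf - Hf.mean(dim=0, keepdim=True)  # center each feature (Eq. 3)
+     C = Hc.T @ Hc / (B * N - 1)             # covariance matrix (Eq. 4)
+     lam = torch.linalg.eigvalsh(C)          # eigenvalues, ascending (Eq. 5)
+     lam = lam.flip(0).clamp(min=0)          # descending, non-negative
+     ratio = torch.cumsum(lam, 0) / lam.sum()
+     return int((ratio >= eta).nonzero()[0]) + 1  # smallest k meeting eta (Eq. 6)
+ ```
+
+ Summing this per-layer value over all $m$ FFN layers then yields $S(\mathbf{X})$ of Equation (7).
+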
174
+ # 3.3 WEIGHT-WEIGHTED PCA PROXY
175
+
176
+ It is extremely challenging to discover or design a proxy that outperforms the weight parameter count (#Params) in terms of stability and performance (Li et al., 2023a). Having identified PCA as an excellent proxy, we consider multiplying it by #Params a worthwhile step, since the additional computation time of a proxy is negligible compared to training neural networks. Thus, we propose a new metric called W-PCA, which quantifies the amount of valuable information captured by each dimension relative to the number of parameters in the architecture.
177
+
178
+ The W-PCA metric is computed as the product of the number of weight parameters $(w)$ and the PCA value for each dimension. Mathematically, it can be expressed as:
179
+
180
+ $$
181
+ \operatorname{W\text{-}PCA}(\mathbf{X}) = w \times S(\mathbf{X}) \tag{8}
182
+ $$
183
+
184
+ Advantages of our method include:
185
+
186
+ 1. **Strong Correlation:** The W-PCA metric captures the relationship between the number of parameters and the valuable information in each dimension. This relevance is crucial in evaluating the efficiency and effectiveness of candidate architectures. By considering the PCA values, we can identify dimensions that contribute the most to the architecture's performance, allowing for informed decision-making during architecture search.
187
+
188
+ 2. **Gradient-Free:** Unlike many traditional optimization methods that rely on gradients, our methodology is gradient-free. This eliminates the need for extensive backpropagation and derivative calculations, making the evaluation process more efficient and less computationally expensive.
189
+
190
+ 3. **One Forward Propagation Only:** Our methodology requires only forward propagation during the evaluation of candidate architectures. This simplifies the implementation and reduces the computational overhead, as it avoids the need for complex and resource-intensive operations such as backpropagation.
191
+
192
+ By leveraging the advantages of strong relevance, gradient-freeness, and the use of only forward propagation, our methodology based on the W-PCA metric provides an efficient and effective approach for training-free architecture search. It enables researchers and practitioners to evaluate candidate architectures based on their valuable information content relative to the number of parameters, facilitating the exploration of architecture design space and aiding in the development of more efficient and effective models.
193
+
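+ Putting the pieces together, one possible end-to-end scoring routine is sketched below: forward hooks collect the FFN hidden states, `pca_dim` from the previous sketch scores each layer, and the result is weighted by the parameter count. The module list `ffn_hidden_layers` is model-specific and therefore an assumption on our part.
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def w_pca(model, ffn_hidden_layers, batch, eta=0.99):
+     """W-PCA(X) = w * S(X) as in Eq. (8), from a single forward pass."""
+     feats = []
+     hooks = [m.register_forward_hook(lambda mod, inp, out: feats.append(out))
+              for m in ffn_hidden_layers]    # modules emitting X W_1 + b_1
+     model(batch)                            # one forward propagation only
+     for h in hooks:
+         h.remove()
+     s = sum(pca_dim(f, eta) for f in feats)          # S(X), Eq. (7)
+     w = sum(p.numel() for p in model.parameters())   # parameter count
+     return w * s
+ ```
+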
194
+ # 4 SEARCH SPACE FOR NLU TASKS
195
+
196
+ To enhance the evaluation of W-PCA's performance on NLU tasks, we have meticulously crafted a search space. Drawing inspiration from SPOS (Guo et al., 2020), our search targets a model composed of multiple layers, each of which can be a lightweight BERT block. The hidden dimension of the FFN layer within each BERT block is determined through random selection. This deliberate randomness aids in exploring a diverse range of architectures during the search process.
197
+
198
+ # 5 RANKING EVALUATION
199
+
200
+ # 5.1 DATASETS
201
+
202
+ To assess the accuracy of the proposed proxy indicators for neural network evaluation, we employed the benchmark of well-trained BERT structures suggested by Serianni & Kalita as the
203
+
204
+ ![](images/6963148f418878f77680ba188146ef0c54ee4661971c8af71ff877e98e2702cf.jpg)
205
+ Figure 4: Overview of the W-PCA framework for NLU tasks. The search space consists of $m$ layers, each with 2 candidate blocks and $n$ candidate dimensions, resulting in a total of $(2 \times n)^{m}$ combinations. A genetic algorithm (detailed parameterization provided in Section 6.2.1) is employed to identify the optimal structure with the highest W-PCA value. This structure is subsequently refined through additional training using knowledge distillation (KD). In the figure, FFN and MHA represent the feed-forward network and multi-head attention, respectively.
206
+
207
+ testing dataset. Specifically, this benchmark selected 500 structures from the FlexiBERT (Tuli et al., 2023) search space (as presented in Table 7) and utilized ELECTRA (Clark et al., 2020), rather than the MLM objective, to efficiently pretrain a compact BERT model. The training dataset comprised 8,013,769 documents sourced from the OpenWebText (Gokaslan et al., 2019) corpus, amounting to a total of 38GB. For detailed training information, please refer to Appendix B. After training, the scores obtained by fine-tuning on the GLUE dataset serve as the reference for evaluating the ranking correlation of the different zero-shot proxies.
208
+
209
+ # 5.2 RESULTS AND ANALYSIS
210
+
211
+ Table 1: Comparison of different zero-shot proxies on the FlexiBERT benchmark. "Time" represents the computation time for the metric calculated 1,000 times. Both $\tau$ and $\rho$ are computed to measure the ranking correlation between each zero-shot proxy and the ground truth of the neural networks, represented by their GLUE scores.
212
+
213
+ <table><tr><td>Proxy</td><td>Time</td><td>∇-free</td><td>τ</td><td>ρ</td></tr><tr><td>Synaptic Diversity (Zhou et al., 2022)</td><td>110 s</td><td>✗</td><td>0.021</td><td>0.174</td></tr><tr><td>Synaptic Saliency (Abdelfattah et al., 2020)</td><td>121 s</td><td>✗</td><td>0.130</td><td>0.185</td></tr><tr><td>Activation Distance (Mellor et al., 2021a)</td><td>68 s</td><td>✓</td><td>0.081</td><td>0.123</td></tr><tr><td>Jacobian Cosine (Celotti et al., 2020)</td><td>103 s</td><td>✗</td><td>0.116</td><td>0.149</td></tr><tr><td>Head Importance (Serianni &amp; Kalita, 2023)</td><td>112 s</td><td>✗</td><td>0.048</td><td>0.170</td></tr><tr><td>Head Confidence (Serianni &amp; Kalita, 2023)</td><td>81 s</td><td>✓</td><td>0.306</td><td>0.364</td></tr><tr><td>Vanilla PCA</td><td>61 s</td><td>✓</td><td>0.466</td><td>0.677</td></tr><tr><td>W-PCA</td><td>74 s</td><td>✓</td><td>0.526</td><td>0.698</td></tr></table>
214
+
215
+ The evaluation results include the Kendall rank correlation coefficient (Kendall $\tau$ ) and the Spearman rank correlation coefficient (Spearman $\rho$ ). Table 1 demonstrates that Vanilla PCA already surpasses the previous zero-shot proxies in terms of ranking correlation, and W-PCA performs even better than Vanilla PCA. Furthermore, W-PCA achieves higher computational efficiency than gradient-based proxies because it requires no gradient computation. In Appendix C, we compare the ranking stability of W-PCA with other zero-shot metrics using different initialization weights and batch inputs.
216
+
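+ Both coefficients are available off the shelf in SciPy; the toy example below (with made-up scores for five architectures, for illustration only) shows the computation used to compare a proxy against the GLUE ground truth.
+
+ ```python
+ from scipy.stats import kendalltau, spearmanr
+
+ # Illustrative numbers only: proxy scores and GLUE ground truth.
+ proxy = [3.1, 2.7, 4.8, 1.9, 3.6]
+ glue = [78.2, 76.5, 80.1, 74.9, 79.0]
+
+ tau, _ = kendalltau(proxy, glue)   # Kendall rank correlation
+ rho, _ = spearmanr(proxy, glue)    # Spearman rank correlation
+ print(f"tau = {tau:.3f}, rho = {rho:.3f}")
+ ```
+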
217
+ # 6 ACCURACY COMPARISON
218
+
219
+ # 6.1 DATASETS
220
+
221
+ To enable an accurate comparison to other lightweight BERT models, we evaluate the performance of W-PCA using the GLUE (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016) datasets and their corresponding task-specific evaluations.
222
+
223
+ Table 2: Performance comparison of the test set on the GLUE benchmark. The performance of all zero-shot proxies is evaluated on the search space depicted in Figure 4. Latency measurements of the models are conducted using the NVIDIA A100 GPU.
224
+
225
+ <table><tr><td>Model</td><td>Type</td><td>#Params</td><td>Latency</td><td>QNLI</td><td>MRPC</td><td>SST-2</td><td>CoLA</td><td>STS-B</td><td>MNLI-m/mm</td><td>RTE</td><td>QQP</td><td>AVG</td></tr><tr><td>BERT-base (Devlin et al., 2019)</td><td>manual</td><td>108.9M</td><td>274ms</td><td>90.5</td><td>88.9</td><td>93.5</td><td>52.1</td><td>85.8</td><td>84.6/83.4</td><td>66.4</td><td>71.2</td><td>79.6</td></tr><tr><td>BERT-base (ours)</td><td>manual</td><td>108.9M</td><td>274ms</td><td>91.4</td><td>88.7</td><td>93.0</td><td>49.0</td><td>87.5</td><td>84.9/83.9</td><td>76.6</td><td>71.3</td><td>80.7</td></tr><tr><td>BERT-tiny (Turc et al., 2019)</td><td>manual</td><td>14.5M</td><td>44ms</td><td>84.8</td><td>83.2</td><td>87.6</td><td>19.5</td><td>77.1</td><td>75.4/74.9</td><td>62.6</td><td>66.5</td><td>70.2</td></tr><tr><td>BERT-small (Turc et al., 2019)</td><td>manual</td><td>28.8M</td><td>79ms</td><td>86.4</td><td>83.4</td><td>89.7</td><td>27.8</td><td>77.0</td><td>77.6/77.0</td><td>61.8</td><td>68.1</td><td>72.1</td></tr><tr><td>DistilBERT-6 (Sanh et al., 2020)</td><td>manual</td><td>67.0M</td><td>151ms</td><td>88.9</td><td>86.9</td><td>92.5</td><td>49.0</td><td>81.3</td><td>82.6/81.3</td><td>58.4</td><td>70.1</td><td>76.8</td></tr><tr><td>TinyBERT-4 (Jiao et al., 2020)</td><td>manual</td><td>14.5M</td><td>45ms</td><td>87.7</td><td>88.5</td><td>91.2</td><td>27.2</td><td>83.0</td><td>81.8/80.7</td><td>64.9</td><td>69.6</td><td>75.0</td></tr><tr><td>MobileBERT-tiny (Sun et al., 2020)</td><td>manual</td><td>15.1M</td><td>62ms</td><td>89.5</td><td>87.9</td><td>91.7</td><td>46.7</td><td>80.1</td><td>81.5/81.6</td><td>65.1</td><td>68.9</td><td>77.0</td></tr><tr><td>EfficientBERT+ (Dong et al., 2021)</td><td>one-shot</td><td>15.7M</td><td>62ms</td><td>89.3</td><td>89.9</td><td>92.4</td><td>38.1</td><td>85.1</td><td>83.0/82.3</td><td>69.4</td><td>71.2</td><td>77.9</td></tr><tr><td>EfficientBERT++ (Dong et al., 2021)</td><td>one-shot</td><td>16.0M</td><td>65ms</td><td>90.6</td><td>88.9</td><td>92.3</td><td>42.5</td><td>83.6</td><td>83.0/82.5</td><td>67.8</td><td>71.2</td><td>78.0</td></tr><tr><td>Synaptic Saliency (Abdelfattah et al., 2020)</td><td>zero-shot</td><td>15.7M</td><td>58ms</td><td>89.4</td><td>88.1</td><td>91.0</td><td>33.6</td><td>83.1</td><td>82.6/81.1</td><td>70.6</td><td>70.3</td><td>76.6</td></tr><tr><td>Activation Distance (Mellor et al., 2021a)</td><td>zero-shot</td><td>15.6M</td><td>60ms</td><td>88.9</td><td>87.6</td><td>91.2</td><td>30.7</td><td>82.9</td><td>81.1/80.4</td><td>70.4</td><td>70.1</td><td>75.9</td></tr><tr><td>Synaptic Diversity (Zhou et al., 2022)</td><td>zero-shot</td><td>15.6M</td><td>57ms</td><td>88.3</td><td>88.1</td><td>91.5</td><td>25.8</td><td>84.7</td><td>81.3/80.2</td><td>70.6</td><td>70.3</td><td>75.6</td></tr><tr><td>Head Confidence (Serianni &amp; Kalita, 2023)</td><td>zero-shot</td><td>15.6M</td><td>63ms</td><td>89.5</td><td>88.3</td><td>92.4</td><td>31.7</td><td>85.7</td><td>82.8/81.9</td><td>74.0</td><td>70.9</td><td>77.5</td></tr><tr><td>Softmax Confidence (Serianni &amp; Kalita, 2023)</td><td>zero-shot</td><td>15.6M</td><td>61ms</td><td>88.4</td><td>87.5</td><td>90.8</td><td>32.5</td><td>83.5</td><td>81.2/80.5</td><td>70.3</td><td>69.9</td><td>76.1</td></tr><tr><td>W-PCA-Tiny</td><td>zero-shot</td><td>9.6M</td><td>38ms</td><td>88.7</td><td>87.6</td><td>91.9</td><td>27.4</td><td>84.8</td><td>81.1/79.8</td><td>71.1</td><td>70.3</td><td>75.9</td></tr><tr><td>W-PCA-Small</td><td>zero-shot</td><td>15.6M</td><td>54ms</td><td>90.3</td><td>88.7</td><td>91.5</td><td>38.4</td><td>86.4</td><td>82.8/82.2</td><td>73.8</td><td>70.8</td><td>78.3</td></tr></table>
226
+
227
+ # 6.2 IMPLEMENTATION DETAILS
228
+
229
+ # 6.2.1 SEARCH SPACE
230
+
231
+ The search space is illustrated in Figure 4, wherein we set the values of $m$ and $n$ to 12 and 6, respectively. Each block possesses a hidden size of 528, with the inner hidden size of the MobileBERT series blocks being one-fourth of the total hidden size. The candidate hidden dimensions of the FFN are multiples of 132, ranging from $132 \times 1$ to $132 \times n$. To calculate the value of PCA_dim, we set $\eta$ to 0.99, with the selection process detailed in Appendix D. To identify the combination of blocks that yields the highest W-PCA value, we utilize a genetic algorithm, the detailed implementation of which can be found in Appendix E; a simplified sketch is given below. This algorithm uses a population size of 50 and a generation count of 40. The crossover probability is set to 1, the mutation probability to 0.1, and the upper limit for the model parameters is set to 15.7M, resulting in the W-PCA-Small model. By further reducing the upper limit for the model parameters to 10M and halving the number of layers ($m$), we obtain the W-PCA-Tiny model.
232
+
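+ A compact sketch of this evolutionary loop under the stated hyperparameters follows; `fitness` (the W-PCA value of a candidate) and `n_params` are assumed helpers, and the selection scheme is a simplified stand-in for the full algorithm in Appendix E.
+
+ ```python
+ import random
+
+ POP, GEN, P_MUT = 50, 40, 0.1
+ BLOCKS, DIMS, LAYERS = 2, 6, 12        # the search space of Figure 4
+
+ def sample_layer():
+     return (random.randrange(BLOCKS), random.randrange(DIMS))
+
+ def sample_gene(n_params, max_params):
+     while True:                        # rejection-sample under the cap
+         g = [sample_layer() for _ in range(LAYERS)]
+         if n_params(g) <= max_params:
+             return g
+
+ def search(fitness, n_params, max_params=15.7e6):
+     """Genetic search maximizing the W-PCA fitness under a parameter cap."""
+     pop = [sample_gene(n_params, max_params) for _ in range(POP)]
+     for _ in range(GEN):
+         parents = sorted(pop, key=fitness, reverse=True)[:POP // 2]
+         children = []
+         while len(children) < POP - len(parents):
+             a, b = random.sample(parents, 2)
+             child = [x if random.random() < 0.5 else y    # crossover (p = 1)
+                      for x, y in zip(a, b)]
+             child = [sample_layer() if random.random() < P_MUT else g
+                      for g in child]                      # mutation (p = 0.1)
+             if n_params(child) <= max_params:             # 15.7M upper limit
+                 children.append(child)
+         pop = parents + children
+     return max(pop, key=fitness)
+ ```
+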
233
+ # 6.2.2 TRAINING
234
+
235
+ Once we obtain the desired architecture, we pretrain the model on the complete English Wikipedia (Devlin et al., 2019) and BooksCorpus (Zhu et al., 2015). We then fine-tune the model on each individual downstream task. During pretraining, the network is trained with a batch size of 256; during fine-tuning on the downstream tasks, the batch size is 32. The CoLA task is trained for 50 epochs, while the other tasks are trained for 10 epochs. The learning rate is set to 0.0001 during pretraining. In the fine-tuning phase, the learning rate is set to 0.00005 for GLUE tasks and 0.0001 for SQuAD tasks. The training process utilizes the Adam optimizer with $\beta_{1}$ and $\beta_{2}$ values of 0.9 and 0.999, respectively, and a weight decay of 0.01. The learning rate decays linearly with a warm-up ratio of 0.1. The KD loss function used in our approach is described in Appendix F.
236
+
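+ The optimizer and schedule described above correspond to a standard setup; the hedged PyTorch sketch below (a generic helper, not the authors' released code) shows one way to reproduce it.
+
+ ```python
+ import torch
+
+ def make_optimizer(model, num_steps, lr=5e-5, warmup_ratio=0.1):
+     """Adam with betas (0.9, 0.999) and weight decay 0.01, plus a
+     linearly decaying learning rate after a 10% warmup."""
+     opt = torch.optim.Adam(model.parameters(), lr=lr,
+                            betas=(0.9, 0.999), weight_decay=0.01)
+     warmup = int(warmup_ratio * num_steps)
+
+     def lr_lambda(step):
+         if step < warmup:
+             return step / max(1, warmup)
+         return max(0.0, (num_steps - step) / max(1, num_steps - warmup))
+
+     sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
+     return opt, sched
+ ```
+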
237
+ # 6.3 RESULTS ON GLUE
238
+
239
+ # 6.3.1 MODEL ACCURACY AND LATENCY
240
+
241
+ Table 2 presents the GLUE scores and model latency of the KD-based methods. Except for our own BERT-base teacher model, the results of all manual and one-shot methods in the table are taken from the corresponding papers. Since zero-shot NAS methods had not previously been applied to NLU tasks, we applied the recent top-performing zero-shot proxy approaches for Transformer language models to the search space shown in Figure 4.
242
+
243
+ As shown in Table 2, under the search space depicted in Figure 4, our W-PCA metric achieved higher average scores on the GLUE test set compared to all baseline manual and one-shot methods. At the same time, it outperformed the previous state-of-the-art (SOTA) method EfficientBERT (Dong et al., 2021) in terms of parameter count, latency, and average score in the field of lightweight models.
244
+
245
+ Table 3: Comparison of results on the GLUE dev set with other NAS methods. The "Time" column represents the GPU days consumed by the NAS method search. A per-task comparison with AutoBERT-Zero-small is not feasible, as it does not report individual scores for the GLUE dev tasks.
246
+
247
+ <table><tr><td>Model</td><td>#Params</td><td>Time</td><td>QNLI</td><td>MRPC</td><td>SST-2</td><td>CoLA</td><td>STS-B</td><td>MNLI-m</td><td>RTE</td><td>QQP</td><td>AVG</td></tr><tr><td>NAS-BERT-10 (Xu et al., 2021)</td><td>10.0M</td><td>96 d</td><td>86.3</td><td>79.1</td><td>88.6</td><td>34.0</td><td>84.8</td><td>76.4</td><td>66.6</td><td>88.5</td><td>75.5</td></tr><tr><td>NAS-BERT-30 (Xu et al., 2021)</td><td>30.0M</td><td>96 d</td><td>88.4</td><td>84.6</td><td>90.5</td><td>48.7</td><td>87.6</td><td>81.0</td><td>71.8</td><td>90.2</td><td>80.3</td></tr><tr><td>EfficientBERT-TINY (Dong et al., 2021)</td><td>9.4M</td><td>58 d</td><td>89.3</td><td>90.1</td><td>90.1</td><td>39.1</td><td>79.9</td><td>81.7</td><td>63.2</td><td>86.7</td><td>77.5</td></tr><tr><td>EfficientBERT (Dong et al., 2021)</td><td>15.7M</td><td>58 d</td><td>90.4</td><td>91.5</td><td>91.3</td><td>50.2</td><td>82.5</td><td>83.1</td><td>66.8</td><td>87.3</td><td>80.4</td></tr><tr><td>AutoBERT-Zero-small (Gao et al., 2022)</td><td>13.0M</td><td>~1,000 d</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>80.5</td></tr><tr><td>Synaptic Diversity (Zhou et al., 2022)</td><td>15.6M</td><td>0.7 d</td><td>88.9</td><td>87.6</td><td>91.4</td><td>32.0</td><td>84.1</td><td>81.0</td><td>73.4</td><td>88.2</td><td>78.3</td></tr><tr><td>Head Confidence (Serianni &amp; Kalita, 2023)</td><td>15.6M</td><td>0.5 d</td><td>90.1</td><td>89.7</td><td>92.4</td><td>37.5</td><td>84.1</td><td>82.5</td><td>75.9</td><td>89.1</td><td>80.2</td></tr><tr><td>Softmax Confidence (Serianni &amp; Kalita, 2023)</td><td>15.6M</td><td>0.5 d</td><td>89.4</td><td>88.3</td><td>92.0</td><td>32.6</td><td>84.7</td><td>81.6</td><td>73.9</td><td>88.9</td><td>78.9</td></tr><tr><td>W-PCA-Tiny</td><td>9.6M</td><td>0.4 d</td><td>89.2</td><td>89.2</td><td>92.0</td><td>33.2</td><td>84.0</td><td>80.5</td><td>71.1</td><td>88.0</td><td>78.4</td></tr><tr><td>W-PCA-Small</td><td>15.6M</td><td>0.5 d</td><td>90.8</td><td>90.5</td><td>92.8</td><td>44.0</td><td>85.3</td><td>82.9</td><td>76.1</td><td>88.8</td><td>81.4</td></tr></table>
248
+
249
+ Additionally, W-PCA achieved the highest score on the STS-B task. It is worth noting that, in the same search space, the optimal structure found by W-PCA surpasses those found by all previous zero-shot methods (Abdelfattah et al., 2020; Mellor et al., 2021a; Zhou et al., 2022; Serianni & Kalita, 2023) applied to Transformer language models, highlighting its exceptional ability to discover optimal network structures among zero-shot NAS methods.
250
+
251
+ # 6.3.2 SEARCH EFFICIENCY
252
+
253
+ As shown in Table 3, within our search space the search efficiency of all zero-shot proxies (including our W-PCA method) improves by two to three orders of magnitude over previous training-based NAS while achieving competitive performance. The three zero-shot proxies Synaptic Diversity (Zhou et al., 2022), Head Confidence (Serianni & Kalita, 2023), and Softmax Confidence (Serianni & Kalita, 2023) can compete with the optimal structures found by previous training-based NAS in our search space. Our W-PCA method surpasses all previous training-based methods in the field of lightweight language models, achieving the best average score, and it attains the highest performance on three of the eight tasks. Our method reaches these new SOTA results for lightweight models at an almost negligible search cost, reducing greenhouse gas $\mathrm{CO}_{2}$ emissions by two to three orders of magnitude $^{1}$ and significantly improving the utilization of global energy resources.
254
+
255
+ It is also worth noting that, in the internal comparison of zero-shot proxies, Head Confidence (Serianni & Kalita, 2023), Softmax Confidence (Serianni & Kalita, 2023), and our W-PCA method require 0.2 fewer GPU days of search time than the Synaptic Diversity (Zhou et al., 2022) method, which must compute gradients. Additionally, our W-PCA-Tiny model has a lower parameter limit during the search, so the forward propagation of each candidate network is slightly faster, reducing the search time by a further 0.1 GPU days compared to the W-PCA-Small model.
256
+
257
+ # 6.4 RESULTS ON SQUAD
258
+
259
+ Table 4: Results on SQuAD dev sets. *: our implementation.
260
+
261
+ <table><tr><td>Model</td><td>#Params</td><td>SQuAD v1.1 EM/F1</td><td>SQuAD v2.0 EM/F1</td></tr><tr><td>BERT-base</td><td>108.9M</td><td>80.8/88.5</td><td>-/-</td></tr><tr><td>BERT-base*</td><td>108.9M</td><td>80.7/88.2</td><td>75.7/78.7</td></tr><tr><td>TinyBERT-4</td><td>14.5M</td><td>72.7/82.1</td><td>68.2/71.8</td></tr><tr><td>MiniLM-6</td><td>22.9M</td><td>-/-</td><td>- / 72.7</td></tr><tr><td>EfficientBERT++</td><td>16.0M</td><td>78.3/86.5</td><td>73.0/76.1</td></tr><tr><td>W-PCA-Tiny</td><td>9.6M</td><td>74.6/83.5</td><td>69.0/72.1</td></tr><tr><td>W-PCA-Small</td><td>15.6M</td><td>78.4/86.7</td><td>73.3/76.8</td></tr></table>
262
+
263
+ We compared the W-PCA proposed in this article with the manually designed lightweight models TinyBERT (Jiao et al., 2020) and MiniLM (Wang et al., 2020), and with the one-shot NAS method EfficientBERT (Dong et al., 2021), on the SQuAD dataset. The results are presented in Table 4. Despite having fewer parameters than TinyBERT-4, MiniLM-6, and EfficientBERT++, the W-PCA-Small model outperforms these methods in terms of both EM and F1 scores on the SQuAD v1.1 and SQuAD v2.0 datasets. This observation demonstrates the robust adaptability of our models across diverse datasets.
264
+
265
+ Table 5: Comparison results of W-PCA and its product counterparts as proxies on the GLUE dev set.
266
+
267
+ <table><tr><td>Proxy</td><td>#Params</td><td>QNLI</td><td>MRPC</td><td>SST-2</td><td>CoLA</td><td>STS-B</td><td>MNLI-m</td><td>RTE</td><td>QQP</td><td>AVG</td></tr><tr><td>#Params</td><td>15.7M</td><td>89.3</td><td>88.8</td><td>90.7</td><td>43.8</td><td>83.6</td><td>82.6</td><td>76.1</td><td>87.5</td><td>80.3</td></tr><tr><td>V-PCA</td><td>15.6M</td><td>89.9</td><td>91.4</td><td>92.7</td><td>39.4</td><td>84.9</td><td>82.9</td><td>76.0</td><td>88.9</td><td>80.8</td></tr><tr><td>W-PCA</td><td>15.6M</td><td>90.8</td><td>90.5</td><td>92.8</td><td>44.0</td><td>85.3</td><td>82.9</td><td>76.1</td><td>88.8</td><td>81.4</td></tr></table>
268
+
269
+ # 6.5 ABLATIONS
270
+
271
+ In order to investigate the effect of each component of W-PCA on the experimental results, we performed ablation experiments. Specifically, we used each component of W-PCA in Equation (8), namely the number of parameters (#Params) and the V-PCA value (defined in Equation (7)), as the fitness value for the genetic algorithm of Section 6.2.1 to explore the optimal network structure. We then compared the performance of the discovered network structures with that of W-PCA.
272
+
273
+ The results, presented in Table 5, demonstrate that by multiplying the number of parameters with the V-PCA value and using W-PCA as the zero-shot evaluation metric, the performance of the searched networks significantly improves compared to using either #Params or V-PCA alone as the evaluation metric.
274
+
275
+ Encouragingly, the incorporation of an extra feature does not necessitate a significant rise in computational time, thereby rendering the multiplication approach highly efficient. For further ablations, please refer to Appendix H.
276
+
277
+ # 7 CONCLUSION
278
+
279
+ In this paper, we propose W-PCA $^{2}$, a novel zero-shot NAS method specifically designed for lightweight language models. In the ranking correlation experiments conducted on the search space of FlexiBERT, W-PCA achieves a Kendall $\tau$ score that surpasses the previous method by 0.220 and a Spearman $\rho$ score that surpasses the previous method by 0.334. In the accuracy experiments conducted on GLUE and SQuAD, W-PCA not only achieves the highest score but also significantly improves search efficiency. On the GLUE test set, W-PCA improves search efficiency by over a hundredfold compared to the previous best-performing one-shot NAS method, with an average score improvement of 0.3. On the GLUE dev set, W-PCA improves search efficiency by 2,000 times and achieves an average score improvement of 0.9 compared to the previous best-performing one-shot NAS method. In future work, we will extend our approach to more compression tasks (Dong et al., 2024a; Li et al., 2024b; 2024e; Dong et al., 2024b). Our work contributes to the advancement of NAS methods for lightweight language models, enabling the design and optimization of efficient and effective systems for natural language processing.
280
+
281
+ **Limitations** This work focuses on the ranking correlation tasks commonly addressed in prior zero-shot NAS methods and on NLU tasks relevant to lightweight models. However, recent language model research increasingly centers on generative models with over 1B parameters. In Appendix I, we further discuss potential extensions related to these large-scale generative models.
282
+
283
+ # ACKNOWLEDGMENTS
284
+
285
+ Special thanks to Professor Yajun Ha from ShanghaiTech University. During the resubmission of this paper to ICLR, he helped us summarize the contributions of the paper and suggested a more suitable title.
286
+
287
+ # REFERENCES
288
+
289
+ Mohamed S Abdelfattah, Abhinav Mehrotra, Lukasz Dudziak, and Nicholas Donald Lane. Zero-cost proxies for lightweight nas. In International Conference on Learning Representations, 2020.
290
+
291
+ Maximiliana Behnke and Kenneth Heafield. Losing heads in the lottery: Pruning transformer attention in neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2664-2674, 2020.
292
+ Luca Celotti, Ismael Balafrej, and Emmanuel Calvet. Improving zero-shot neural architecture search with parameters scoring. 2020.
293
+ Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with $90\%$ * chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023.
294
+ Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555, 2020.
295
+ Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. Free dolly: Introducing the world's first truly open instruction-tuned llm, 2023. URL https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm.
296
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019.
297
+ Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, and Xiaodan Liang. Efficientbert: Progressively searching multilayer perceptron via warm-up knowledge distillation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 1424-1437, 2021.
298
+ Peijie Dong, Xin Niu, Lujun Li, Linzhen Xie, Wenbin Zou, Tian Ye, Zimian Wei, and Hengyue Pan. Prior-guided one-shot neural architecture search. arXiv preprint arXiv:2206.13329, 2022.
299
+ Peijie Dong, Lujun Li, and Zimian Wei. Diswot: Student architecture search for distillation without training. In CVPR, 2023a.
300
+ Peijie Dong, Lujun Li, Zimian Wei, Xin Niu, Zhiliang Tian, and Hengyue Pan. Emq: Evolving training-free proxies for automated mixed precision quantization. arXiv preprint arXiv:2307.10554, 2023b.
301
+ Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang Wang, and Xiaowen Chu. Pruner-zero: Evolving symbolic pruning metric from scratch for large language models. In ICML, 2024a.
302
+ Peijie Dong, Lujun Li, Yuedong Zhong, Dayou Du, Ruibo Fan, Yuhan Chen, Zhenheng Tang, Qiang Wang, Wei Xue, Yike Guo, et al. Stbllm: Breaking the 1-bit barrier with structured binary llms. arXiv preprint arXiv:2408.01803, 2024b.
303
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
304
+ Jiahui Gao, Hang Xu, Han Shi, Xiaozhe Ren, LH Philip, Xiaodan Liang, Xin Jiang, and Zhenguo Li. Autobert-zero: Evolving bert backbone from scratch. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 10663-10671, 2022.
305
+ Aaron Gokaslan, Vanya Cohen, Ellie Pavlick, and Stefanie Tellex. Openwebtext corpus, 2019.
306
+ Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. Minillm: Knowledge distillation of large language models. In The Twelfth International Conference on Learning Representations, 2024.
307
+ Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVI 16, pp. 544-560. Springer, 2020.
308
+
309
+ Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689, 2022.
310
+ Yiming Hu, Xingang Wang, Lujun Li, and Qingyi Gu. Improving one-shot nas with shrinking-and-expanding supernet. Pattern Recognition, 2021.
311
+ Zi-Hang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan. Convbert: Improving bert with span-based dynamic convolution. Advances in Neural Information Processing Systems, 33:12837-12848, 2020.
312
+ Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. Tinybert: Distilling bert for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 4163-4174, 2020.
313
+ Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, Alexander Filippov, and Evgeny Burnaev. Nas-bench-nlp: neural architecture search benchmark for natural language processing. IEEE Access, 10:45736-45747, 2022.
314
+ Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, and Radu Marculescu. Zico: Zero-shot nas via inverse coefficient of variation on gradients. In The Eleventh International Conference on Learning Representations, 2023a.
315
+ Lujun Li. Self-regulated feature learning via teacher-free feature distillation. In ECCV, 2022.
316
+ Lujun Li and Zhe Jin. Shadow knowledge distillation: Bridging offline and online knowledge transfer. Advances in Neural Information Processing Systems, 2022.
317
+ Lujun Li, Peijie Dong, Anggeng Li, Zimian Wei, and Yang Ya. Kd-zero: Evolving knowledge distiller for any teacher-student pairs. In Thirty-seventh Conference on Neural Information Processing Systems, 2023b.
318
+ Lujun Li, Peijie Dong, Zimian Wei, and Ya Yang. Automated knowledge distillation via monte carlo tree search. In ICCV, 2023c.
319
+ Lujun Li, Yufan Bao, Peijie Dong, Chuanguang Yang, Anggeng Li, Wenhan Luo, Qifeng Liu, Wei Xue, and Yike Guo. Detkds: Knowledge distillation search for object detectors. In ICML, 2024a.
320
+ Lujun Li, Peijie Dong, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang Wang, Wenhan Luo, Wei Xue, Qifeng Liu, Xiaowen Chu, and Yike Guo. Discovering sparsity allocation for layer-wise pruning of large language models. In NeurIPS, 2024b.
321
+ Lujun Li, Haosen Sun, Shiwen Li, Peijie Dong, Wenhan Luo, Wei Xue, Qifeng Liu, and Yike Guo. Auto-gas: Automated proxy discovery for training-free generative architecture search. ECCV, 2024c.
322
+ Lujun Li, Zimian Wei, Peijie Dong, Wenhan Luo, Wei Xue, Qifeng Liu, and Yike Guo. Attnzero: efficient attention discovery for vision transformers. In ECCV, 2024d.
323
+ Wei Li, Lujun Li, Mark Lee, and Shengjie Sun. Als: Adaptive layer sparsity for large language models via activation correlation assessment. In NeurIPS, 2024e.
324
+ Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In International Conference on Learning Representations, 2018.
325
+ Xiaolong Liu, Lujun Li, Chao Li, and Anbang Yao. Norm: Knowledge distillation via n-to-one representation matching. arXiv preprint arXiv:2305.13803, 2023.
326
+ Joe Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search without training. In International Conference on Machine Learning, pp. 7588-7598. PMLR, 2021a.
327
+ Joseph Mellor, Jack Turner, Amos Storkey, and Elliot J. Crowley. Neural architecture search without training, 2021b. URL https://openreview.net/forum?id=g4E6SAAvACo.
328
+ Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? In Advances in Neural Information Processing Systems, 2019.
329
+
330
+ OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
331
+ Xuran Pan, Xuan Jin, Yuan He, Shiji Song, Gao Huang, et al. Budgeted training for vision transformer. In The Eleventh International Conference on Learning Representations, 2023.
332
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
333
+ Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In Proceedings of the aaai conference on artificial intelligence, volume 33, pp. 4780-4789, 2019.
334
+ Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2020.
335
+ Aaron Serianni and Jugal Kalita. Training-free neural architecture search for RNNs and transformers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2522-2540, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.142. URL https://aclanthology.org/2023.acl-long.142.
336
+ Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama. https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama, 2023. URL https://huggingface.co/datasets/cerebras/SlimPajama-627B.
337
+ Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3645-3650, 2019.
338
+ Haosen Sun, Lujun Li, Peijie Dong, Zimian Wei, and Shitong Shao. Auto-das: Automated proxy discovery for training-free distillation-aware architecture search. ECCV, 2024.
339
+ Yanan Sun, Bing Xue, Mengjie Zhang, and Gary G Yen. Evolving deep convolutional neural networks for image classification. IEEE Transactions on Evolutionary Computation, 24(2):394-407, 2019.
340
+ Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. Mobilebert: a compact task-agnostic bert for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2158-2170, 2020.
341
+ Hidenori Tanaka, Daniel Kunin, Daniel L. K. Yamins, and Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. In Advances in Neural Information Processing Systems, 2020.
342
+ Shikhar Tuli, Bhishma Dedhia, Shreshth Tuli, and Niraj K Jha. Flexibert: Are current transformer architectures too homogeneous and rigid? Journal of Artificial Intelligence Research, 77:39-70, 2023.
343
+ Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962, 2019.
344
+ Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5797-5808, 2019.
345
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5446. URL https://aclanthology.org/W18-5446.
346
+ Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems, 33:5776-5788, 2020.
347
+
348
+ Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560, 2022a.
349
+ Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Benchmarking generalization via in-context instructions on 1,600+ language tasks. arXiv preprint arXiv:2204.07705, 2022b.
350
+ Zimian Wei, Lujun Li, Peijie Dong, Anggeng Li, Menglong Lu, Hengyue Pan, and Dongsheng Li. Auto-prox: Training-free vision transformer architecture search via automatic proxy discovery. AAAI, 2024.
351
+ Zhanghao Wu, Zhijian Liu, Ji Lin, Yujun Lin, and Song Han. Lite transformer with long-short range attention. In International Conference on Learning Representations, 2019.
352
+ Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. Nas-bert: task-agnostic and adaptive-size bert compression with neural architecture search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1933–1943, 2021.
353
+ Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
354
+ Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, and Rongrong Ji. Training-free transformer architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10894-10903, 2022.
355
+ Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pp. 19-27, 2015.
356
+ Barret Zoph and Quoc Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2016.
357
+
358
+ Table 6: Performance comparison on the GLUE test set for a model composed of 12 identical blocks.
359
+
360
+ <table><tr><td>Block Type</td><td>#Params</td><td>QNLI</td><td>MRPC</td><td>SST-2</td><td>CoLA</td><td>STS-B</td><td>MNLI-m/mm</td><td>RTE</td><td>QQP</td><td>AVG</td></tr><tr><td>BERT (Devlin et al., 2019)</td><td>24.8M</td><td>90.9</td><td>89.0</td><td>92.9</td><td>46.7</td><td>87.2</td><td>84.4/83.5</td><td>71.0</td><td>71.9</td><td>79.7</td></tr><tr><td>MobileBERT (Sun et al., 2020)</td><td>10.4M</td><td>87.2</td><td>87.7</td><td>91.4</td><td>31.8</td><td>84.9</td><td>81.5/81.4</td><td>69.6</td><td>70.8</td><td>76.3</td></tr><tr><td>LiteTransformer (Wu et al., 2019)</td><td>34.6M</td><td>90.2</td><td>89.2</td><td>92.5</td><td>36.4</td><td>86.4</td><td>83.1/82.3</td><td>74.0</td><td>70.8</td><td>78.3</td></tr><tr><td>ConvBERT (Jiang et al., 2020)</td><td>37.4M</td><td>89.7</td><td>89.4</td><td>91.9</td><td>36.0</td><td>85.8</td><td>82.5/82.0</td><td>72.1</td><td>70.7</td><td>77.8</td></tr></table>
361
+
362
+ # A WHY BERT AND MOBILEBERT?
363
+
364
+ As shown in Table 6, we trained models composed of 12 blocks of the same type, with an FFN dimension of 528. The results indicate that BERT and MobileBERT blocks outperform the other blocks in terms of performance per parameter. Therefore, we include these two fundamental blocks in our search space. In fact, if we construct a supernet as described in Section H.3, the optimal network structure selected by the genetic algorithm also chooses only BERT and MobileBERT blocks, without considering any other block types.
365
+
366
+ # B TRAINING DETAILS OF THE FLEXIBERT SEARCH SPACE
367
+
368
+ Table 7: The FlexiBERT search space comprises a total of 10,621,440 architectures.
369
+
370
+ <table><tr><td colspan="2">Architecture Element</td><td>Hyperparameters Values</td></tr><tr><td colspan="2">Embedding dimension</td><td>{128, 256}</td></tr><tr><td colspan="2">Number of Encoder Layers</td><td>{2, 4}</td></tr><tr><td colspan="2">Type of attention operator</td><td>{self-attention, linear transform, span-based dynamic convolution}</td></tr><tr><td colspan="2">Number of operation heads</td><td>{2, 4}</td></tr><tr><td colspan="2">Hidden dimension</td><td>{512, 1024}</td></tr><tr><td colspan="2">Number of feed-forward stacks</td><td>{1, 3}</td></tr><tr><td rowspan="3">Attention operation</td><td>if self-attention</td><td>{scaled dot-product, multiplicative}</td></tr><tr><td>if linear transform</td><td>{discrete Fourier, discrete cosine}</td></tr><tr><td>if dynamic convolution</td><td>convolution kernel size: {5, 9}</td></tr></table>
371
+
372
+ All transformer architectures within the search space were trained on TPUv2s with 8 cores and 64 GB of memory using Google Colaboratory. The entire process of pretraining and finetuning the benchmark took approximately 25 TPU days. For the evaluation of training-free metrics, 2.8 GHz Intel Cascade Lake processors with either 16 or 32 cores and 32 GB of memory were employed.
373
+
374
+ In terms of hyperparameter settings, except for setting the training steps to 100,000 during the pre-training phase, everything else is the same as training ELECTRA-Small. Specifically, during the pre-training phase, the generator size multiplier is set to $1/4$ , the mask percentage is set to $15\%$ , the warmup step is 10,000, the learning rate is 5e-4, and the batch_size is 128. During the fine-tuning phase, the learning rate is 3e-4, the layerwise lr decay is 0.8, the warmup fraction is 0.1, the attention dropout is 0.1, and the batch_size is 32. For the RTE and STS tasks, 10 epochs are trained, while for other tasks, 3 epochs are trained. Both during pre-training and fine-tuning phases, the learning rate decay is linear, the vocabulary size is 30522, the dropout is 0.1, the weight decay value is 0.01, the $\epsilon$ value for the Adam optimizer is 1e-6, $\beta_{1}$ value is 0.9, and $\beta_{2}$ value is 0.999.
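+ For readability, the hyperparameters above can be collected into a single configuration sketch. The dictionary below is purely illustrative: the key names are our own shorthand, not the ELECTRA codebase's, and only the values restate the settings listed in this section.
+
+ ```python
+ # Illustrative summary of the training settings described above.
+ pretrain_config = {
+     "train_steps": 100_000,
+     "generator_size_multiplier": 0.25,  # generator is 1/4 the discriminator size
+     "mask_percentage": 0.15,
+     "warmup_steps": 10_000,
+     "learning_rate": 5e-4,
+     "batch_size": 128,
+ }
+ finetune_config = {
+     "learning_rate": 3e-4,
+     "layerwise_lr_decay": 0.8,
+     "warmup_fraction": 0.1,
+     "attention_dropout": 0.1,
+     "batch_size": 32,
+     "epochs": {"RTE": 10, "STS-B": 10, "default": 3},
+ }
+ shared_config = {  # applies to both phases
+     "lr_decay": "linear",
+     "vocab_size": 30522,
+     "dropout": 0.1,
+     "weight_decay": 0.01,
+     "adam_epsilon": 1e-6,
+     "adam_beta1": 0.9,
+     "adam_beta2": 0.999,
+ }
+ ```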
375
+
376
+ # C ABLATIONS OF RANKING EVALUATION
377
+
378
+ To examine the stability of zero-shot metrics, we conducted a series of studies on the effects of random architecture initialization (Figure 5) and varying batch inputs (Figure 6) on the evaluation of zero-shot metrics within the FlexiBERT search space. The observed fluctuation ranges show that, while the number of parameters is naturally unaffected, the other zero-shot metrics fluctuate to varying degrees depending on parameter initialization and batch inputs. The figures indicate that PCA alone, serving as a proxy metric, shows good stability, and W-PCA exhibits the same level of stability as PCA, since multiplication by a constant parameter count does not alter the magnitude of the original
379
+
380
+ ![](images/81f2655c8c5eaae0ff3ec752320ed5318d988eb7debac99dd6036cff78bc0846.jpg)
381
+
382
+ ![](images/9556fe3ef869d5b34f2bfef49a8696f1a8aed0bd9573f61b396a748b901017fa.jpg)
383
+
384
+ ![](images/e337d0ac59948227bc09d7b6ab974bc759f53b224ec817bf8d8d094ee7b763d6.jpg)
385
+
386
+ ![](images/908848ecdad8e01ad16ef486772b9029ec1c789afdfe793f3a2ec4b73db52b6e.jpg)
387
+
388
+ ![](images/ac7ceb7bb9553717510a080c1545249a83fd7749a3acb16a366e4a7a1f69ec72.jpg)
389
+ Figure 5: Evaluation of zero-shot metrics with various initialization weights in the FlexiBERT search space. Ten architectures are randomly sampled from the search space, representing decile ranges of the GLUE score (e.g., $0 - 10\%$ , $10 - 20\%$ , ..., $90 - 100\%$ ). To ensure robustness, ten different random seeds are employed for weight initialization.
390
+
391
+ ![](images/abf9dc11fd0d43b2a2586a05ac78efdf08812867ddba979d95e94f7bfc31c16b.jpg)
392
+
393
+ ![](images/32ce79d5099dca6e5df0a8c54b3f73f5cbad88c52173043c59476d5a3c47f213.jpg)
394
+
395
+ ![](images/a3030b39af7797e9b56819c912e85a27f58b09ab5837b9ab56c15e3c83d19e41.jpg)
396
+
397
+ ![](images/c0fde6f17d7f2ea650fe30bc814338ed2bd882057c33d2282cbe36db4411d50c.jpg)
398
+
399
+ ![](images/cdfa1c8794671c9a5ed128f6352d68bf6b422a90a8f3a18b106998a145bb3bcd.jpg)
400
+
401
+ ![](images/953974e07d072e323894ef101c6c37df2730c05641f0dd9a88163d7f5a7bb81b.jpg)
402
+
403
+ ![](images/c29cadf859e2c8e29abb9412556c017a8fc5acaa8eead4a77f67401834a5caee.jpg)
404
+
405
+ ![](images/a000b6f858034ffb13a627aa4adc6c7fc932fc83542264337c181c0263dfd137.jpg)
406
+ Figure 6: Evaluation of zero-shot metrics with various minibatch inputs in the FlexiBERT search space. Ten architectures are randomly sampled from the search space, representing decile ranges of the GLUE score (e.g., $0 - 10\%$ , $10 - 20\%$ , ..., $90 - 100\%$ ). The same ten minibatches, each with a size of 128, are randomly chosen from the OpenWebText dataset for each architecture and metric.
407
+
408
+ ![](images/82d18698a30ca99353f50e7caaf0d0269b78d8aad564175c3c2a28a7ba5f69b4.jpg)
409
+
410
+ ![](images/d892b349797915be49999130b83d0821f3a1ad429a19ea233caec8fe3474f365.jpg)
411
+
412
+ ![](images/0a1aef5c50214c766fe8c1a7d1f1d35cd614db1848564d3cdd72a6aed30f9a66.jpg)
413
+
414
+ proxy metric's fluctuation. Compared to other zero-shot metrics, PCA and W-PCA exhibit slightly better stability than Activation Distance (Mellor et al., 2021a) and Head Softmax Confidence (Serianni & Kalita, 2023), and significantly better stability than Synaptic Diversity (Zhou et al., 2022), Synaptic Saliency (Abdelfattah et al., 2020), and Head Importance (Serianni & Kalita, 2023). These findings show that our proposed method not only achieves superior ranking evaluation but also offers improved stability.
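+ The stability protocol behind Figures 5 and 6 can be summarized in a short sketch. Here `build_model`, `sample_batch`, and `compute_proxy` are hypothetical stand-ins for the actual implementation; they respectively construct a randomly initialized architecture, draw a minibatch (e.g., from OpenWebText), and evaluate a zero-shot metric.
+
+ ```python
+ import numpy as np
+
+ def proxy_fluctuation(arch, compute_proxy, build_model, sample_batch,
+                       n_seeds=10, n_batches=10):
+     """Relative spread of a zero-shot proxy for one architecture."""
+     # Fluctuation over random weight initializations (cf. Figure 5).
+     fixed_batch = sample_batch(seed=0)
+     init_scores = [compute_proxy(build_model(arch, seed=s), fixed_batch)
+                    for s in range(n_seeds)]
+     # Fluctuation over minibatch inputs (cf. Figure 6).
+     fixed_model = build_model(arch, seed=0)
+     batch_scores = [compute_proxy(fixed_model, sample_batch(seed=b))
+                     for b in range(n_batches)]
+     # Coefficient of variation: smaller means more stable.
+     cv = lambda xs: float(np.std(xs) / (abs(np.mean(xs)) + 1e-12))
+     return cv(init_scores), cv(batch_scores)
+ ```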
415
+
416
+ # D ROBUSTNESS WITH DIFFERENT $\eta$ 'S
417
+
418
+ Table 8: Principal component dimensions at different $\eta$ values for a neural network model composed of 12 identical blocks during training.
419
+
420
+ <table><tr><td>Model</td><td>η = 0.9</td><td>η = 0.99</td><td>η = 0.999</td><td>η = 1</td></tr><tr><td>BERT</td><td>91 (17.2%)</td><td>159 (30.1%)</td><td>286 (54.2%)</td><td>528</td></tr><tr><td>MobileBERT</td><td>58 (43.9%)</td><td>101 (76.5%)</td><td>125 (94.7%)</td><td>132</td></tr></table>
421
+
422
+ In our NLU task experiments, when training networks composed of 12 identical BERT or MobileBERT blocks, we observed an extremely uneven distribution of principal components. Specifically, the largest eigenvalues, when sorted, are predominantly concentrated at the beginning, while the remaining eigenvalues are close to zero. The dimensions required for each layer to achieve different $\eta$ values when training these models are shown in Table 8.
423
+
424
+ From the table, it is evident that the BERT blocks exhibit a more uneven distribution than the MobileBERT blocks, as only $54.2\%$ of the dimensions are needed to reach a 0.999 principal component contribution rate. However, setting $\eta$ to 0.999 would erase the distinction among MobileBERT blocks, since their principal component dimensions would all be integers between 125 and 132. Because our search space includes networks composed of both BERT and MobileBERT blocks, we compromised by setting $\eta$ to 0.99.
425
+
426
+ Table 9: Results obtained from varying $\eta$ values in rank correlation experiments.
427
+
428
+ <table><tr><td rowspan="2">Proxy</td><td colspan="2">η = 0.9</td><td colspan="2">η = 0.99</td><td colspan="2">η = 0.999</td></tr><tr><td>τ</td><td>ρ</td><td>τ</td><td>ρ</td><td>τ</td><td>ρ</td></tr><tr><td>Vanilla PCA</td><td>0.466</td><td>0.677</td><td>0.449</td><td>0.667</td><td>0.433</td><td>0.653</td></tr><tr><td>W-PCA</td><td>0.526</td><td>0.698</td><td>0.513</td><td>0.689</td><td>0.499</td><td>0.688</td></tr></table>
429
+
430
+ For the ranking correlation experiments, each block has more combination possibilities. We adjusted different $\eta$ values and obtained the ranking correlations reported in Table 9. As shown in the table, both vanilla PCA and W-PCA maintain high rank correlation regardless of the $\eta$ value, demonstrating the robustness of the $\eta$ variable in the rank correlation experiments. Additionally, W-PCA consistently outperforms vanilla PCA
431
+
432
+ in both Kendall's $\tau$ and Spearman's $\rho$ correlations across different $\eta$ values, further indicating that incorporating the number of parameters as a factor in the proxy enhances the stability of the rank correlation results.
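+ For concreteness, a minimal sketch of the per-layer computation is given below. It assumes the proxy counts the principal components needed to reach a cumulative eigenvalue contribution of $\eta$ on a layer's output features, and that W-PCA scales the summed count by the parameter count, as the stability discussion in Appendix C suggests; the function names and exact scaling are our reading of the text, not the authors' code.
+
+ ```python
+ import numpy as np
+
+ def pca_dim(features, eta=0.99):
+     """Number of principal components whose cumulative eigenvalue
+     contribution reaches eta; `features` has shape (n_samples, dim)."""
+     x = features - features.mean(axis=0, keepdims=True)
+     eig = np.linalg.eigvalsh(np.cov(x, rowvar=False))[::-1]  # descending
+     ratio = np.cumsum(eig) / eig.sum()
+     return int(np.searchsorted(ratio, eta) + 1)
+
+ def w_pca_score(layer_features, num_params, eta=0.99):
+     """Assumed W-PCA proxy: parameter count times the summed per-layer
+     PCA dimensions. The parameter count is a constant factor for a given
+     architecture, so it does not change the relative fluctuation
+     analyzed in Appendix C."""
+     return num_params * sum(pca_dim(f, eta) for f in layer_features)
+ ```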
433
+
434
+ # E IMPLEMENTATION DETAILS OF GENETIC ALGORITHM
435
+
436
+ The genetic algorithm is a widely used classic algorithm for solving combinatorial optimization problems. In Figure 7, we illustrate the flowchart of the genetic algorithm. Early NAS algorithms (Real et al., 2019; Sun et al., 2019) often encoded each individual neural network and then trained them from scratch. To efficiently identify the best combination of lightweight BERT blocks, we encoded and solved the combination using a genetic algorithm.
437
+
438
+ ![](images/f2fe8910b87dbc7cc7fec3138dec0eff8f4a2bf660768bee143ae4bc549129a8.jpg)
439
+
440
+ # E.1 ENCODE
441
+
442
+ When there are $m$ layers in the combination, we use an integer array of length $m$ to encode the per-layer selection. Each integer in the array encodes both the block type and the hidden dimension of that layer. Taking the 12-layer W-PCA-Small model as an example, if the number at a given position is less than or equal to 5, a BERT-type block is selected; otherwise, a MobileBERT-type block is chosen. Letting $x$ be the integer at the current layer position, the hidden dimension is calculated as $132 \times (x\% 6 + 1)$. A small decoding sketch follows.
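+ The gene-to-layer mapping described above can be written out directly; the function name and the assumed gene range of 0-11 are our reading of the text.
+
+ ```python
+ def decode_gene(x):
+     """Decode one integer gene into (block type, hidden dimension).
+     Genes 0-5 select a BERT-type block, 6-11 a MobileBERT-type block;
+     in both cases the hidden dimension is 132 * (x % 6 + 1)."""
+     block = "BERT" if x <= 5 else "MobileBERT"
+     return block, 132 * (x % 6 + 1)
+
+ # A 12-layer W-PCA-Small candidate is then a length-12 integer array:
+ encoding = [3, 7, 11, 0, 5, 9, 2, 6, 10, 1, 8, 4]
+ layers = [decode_gene(x) for x in encoding]
+ ```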
443
+
444
+ Figure 7: Diagram of genetic algorithm. In order to effectively solve practical problems using a genetic algorithm, it is crucial to define a suitable method for encoding the solutions. Additionally, in each generation, the fitness of every individual should be calculated, and only the most excellent individuals should be selected for the application of crossover and mutation operators. These operators are used to generate the individuals that will make up the next generation.
445
+
446
+ # E.2 CROSSOVER
447
+
448
+ Algorithm 1: Crossover operation in genetic algorithm
449
+ Input: Parent 1 encoding $p1$ , Parent 2 encoding $p2$
450
+ Output: Offspring encoding
451
+ Function Crossover $(p1, p2)$ :
452
+ 1 Create an empty offspring encoding child
453
+ 2 for $i \gets 1$ to length $(p1)$ do
454
+ 3 if random number $< 0.5$ then
455
+ 4 Add the gene from the corresponding position in the parent 1 encoding to the offspring encoding
456
+ 5 child[i] $\leftarrow p1[i]$
457
+ 6 else
458
+ 7 Add the gene from the corresponding position in the parent 2 encoding to the offspring encoding
459
+ 8 child[i] $\leftarrow p2[i]$
460
+ 9 end
461
+ 10 end
462
+ 11 return child
463
+
464
+ We employ the uniform crossover method outlined in Algorithm 1 to produce offspring encodings, copying each gene from one of the two parents with equal probability. In each generation, parents are randomly chosen from the top 10 individuals, and offspring are generated through crossover until the desired number of offspring is obtained. A Python sketch of this operator is given below.
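+ A direct Python rendering of Algorithm 1 might look like the sketch below; it is illustrative rather than the authors' implementation.
+
+ ```python
+ import random
+
+ def crossover(p1, p2):
+     """Uniform crossover per Algorithm 1: each offspring gene is copied
+     from parent 1 or parent 2 with equal probability."""
+     assert len(p1) == len(p2)
+     return [p1[i] if random.random() < 0.5 else p2[i]
+             for i in range(len(p1))]
+ ```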
465
+
466
+ # E.3 MUTATION
467
+
468
+ We set the mutation probability to 0.1. When a mutation is triggered at a given position, that position randomly mutates into another integer within the selection range.
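+ Under the same assumptions (integer genes in the range 0-11), the mutation operator can be sketched as:
+
+ ```python
+ import random
+
+ def mutate(encoding, p_mut=0.1, gene_range=12):
+     """With probability 0.1 per position, resample the gene uniformly
+     from the other integers in the selection range."""
+     out = list(encoding)
+     for i in range(len(out)):
+         if random.random() < p_mut:
+             out[i] = random.choice([g for g in range(gene_range) if g != out[i]])
+     return out
+ ```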
469
+
470
+ # F KD LOSS FUNCTION
471
+
472
+ The distillation loss function of EfficientBERT (Dong et al., 2021) forms the basis of our approach. For the student model, we define $\mathcal{L}_{attn}^{i}$ as the loss for the multi-head attention (MHA) output and $\mathcal{L}_{hidd}^{i}$ as the loss for the feed-forward network (FFN) output in the $i$-th layer. The embedding loss, represented by $\mathcal{L}_{embd}$, is also included. These losses are calculated using the mean squared error (MSE) as follows:
473
+
474
+ $$
475
+ \left\{ \begin{array}{l} \mathcal{L}_{attn}^{i} = \operatorname{MSE}\left(\mathbf{A}_{i}^{S}\mathbf{W}_{a}, \mathbf{A}_{j}^{T}\right), \\ \mathcal{L}_{hidd}^{i} = \operatorname{MSE}\left(\mathbf{H}_{i}^{S}\mathbf{W}_{h}, \mathbf{H}_{j}^{T}\right), \\ \mathcal{L}_{embd} = \operatorname{MSE}\left(\mathbf{E}^{S}\mathbf{W}_{e}, \mathbf{E}^{T}\right) \end{array} \right. \tag{9}
476
+ $$
477
+
478
+ Here, $\mathbf{A}_i^S$ and $\mathbf{H}_i^S$ represent the outputs of the MHA and FFN layers, respectively, in the $i$ -th layer of the student model. Similarly, $\mathbf{A}_j^T$ and $\mathbf{H}_j^T$ represent the outputs of the MHA and FFN layers, respectively, in the $j$ -th layer of the teacher model corresponding to the $i$ -th layer of the student model.
479
+
480
+ For our fixed teacher model, BERT-base, which comprises 12 layers, a one-to-one sequential correspondence exists between the layers of the student and teacher models when both models have 12 layers. However, in the case of a student model with only 6 layers, the correspondence remains one-to-one, but with a 2-layer interval. This implies that the first layer of the student model corresponds to the second layer of the teacher model, and so forth, until the sixth layer of the student model aligns with the twelfth layer of the teacher model.
481
+
482
+ The trainable matrices $\mathbf{W}_a$ , $\mathbf{W}_h$ , and $\mathbf{W}_e$ are used to adjust the dimensionality of the student and teacher models. Additionally, we define $\mathcal{L}_{pred}$ as the prediction loss, which is calculated using soft cross-entropy (CE):
483
+
484
+ $$
485
+ \mathcal{L}_{pred} = \operatorname{CE}\left(\mathbf{z}^{S}, \mathbf{z}^{T}\right) \tag{10}
486
+ $$
487
+
488
+ ![](images/792b9a4bb7bc26cc18cd776117e110ba13d9533d17009e02b21adc67db284b84.jpg)
489
+ Figure 8: Visualizations of the searched architectures, where d represents the hidden dimensions.
490
+
491
+ Here, $\mathbf{z}$ represents the predicted logit vector.
492
+
493
+ The total loss is a combination of the above terms:
494
+
495
+ $$
496
+ \mathcal{L} = \sum_{i=1}^{m} \left(\mathcal{L}_{attn}^{i} + \mathcal{L}_{hidd}^{i}\right) + \mathcal{L}_{embd} + \gamma \mathcal{L}_{pred} \tag{11}
497
+ $$
498
+
499
+ The coefficient $\gamma$ controls the contribution of the prediction loss. It is set to 0 during the pre-training phase and to 1 during the fine-tuning phase.
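+ A compact PyTorch-style sketch of Eqs. (9)-(11) is given below. The layer mapping and the projections $\mathbf{W}_a$, $\mathbf{W}_h$, $\mathbf{W}_e$ follow the description above, while the dictionary interface and the soft cross-entropy helper are our assumptions, not the authors' code.
+
+ ```python
+ import torch.nn.functional as F
+
+ def soft_ce(z_s, z_t):
+     """Soft cross-entropy between student and teacher logits (Eq. 10)."""
+     return -(F.softmax(z_t, dim=-1) * F.log_softmax(z_s, dim=-1)).sum(-1).mean()
+
+ def kd_loss(student, teacher, W_a, W_h, W_e, gamma):
+     """Total distillation loss (Eq. 11). `student`/`teacher` are assumed
+     dicts holding per-layer MHA outputs "attn", FFN outputs "hidd", the
+     embedding "embd", and logits "z"; W_a, W_h, W_e are the trainable
+     projections aligning student and teacher dimensions."""
+     m = len(student["attn"])
+     stride = len(teacher["attn"]) // m  # 2 when distilling 12 -> 6 layers
+     loss = 0.0
+     for i in range(m):
+         j = (i + 1) * stride - 1  # student layer i maps to teacher layer j
+         loss = loss + F.mse_loss(student["attn"][i] @ W_a, teacher["attn"][j])
+         loss = loss + F.mse_loss(student["hidd"][i] @ W_h, teacher["hidd"][j])
+     loss = loss + F.mse_loss(student["embd"] @ W_e, teacher["embd"])
+     return loss + gamma * soft_ce(student["z"], teacher["z"])
+ ```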
500
+
501
+ # G VISUALIZATION OF ARCHITECTURES
502
+
503
+ Figure 8 illustrates the schematic diagram of the network structure. It is observed that all models preferentially choose MobileBERT as the candidate block, suggesting that MobileBERT is better suited for lightweight language models in comparison to BERT-base. Furthermore, with the exception of the searched model that solely relies on parameter count as the search evaluation metric, the candidate blocks of MobileBERT are predominantly located in the higher layers, indicating that this architecture is more adept at analyzing high-level semantic information.
504
+
505
+ # H MORE ABLATIONS ON ACCURACY COMPARISON
506
+
507
+ # H.1 DIFFERENT INITIALIZATIONS
508
+
509
+ Table 10: Statistical significance tests based on average scores from 3 runs with different random seed initializations on the GLUE dev set.
510
+
511
+ <table><tr><td>Proxy</td><td>#Params</td><td>V-PCA</td><td>W-PCA</td></tr><tr><td>GLUE</td><td>80.3±0.0</td><td>80.9±0.09</td><td>81.5±0.14</td></tr></table>
512
+
513
+ To validate the stability of each component of W-PCA, we conducted multiple runs using #Params, V-PCA, and W-PCA as proxies. The results are presented in Table 10. As shown in the table, under different weight initializations, the structures identified using #Params as a proxy remain identical across runs since #Params does not change. However, when V-PCA and W-PCA are used as proxies, the genetic algorithm identifies different structures, resulting in varying GLUE scores after training. On average, W-PCA demonstrates a clear advantage over V-PCA, and both W-PCA and V-PCA outperform #Params. This further validates the effectiveness of the proposed proxy.
514
+
515
+ Table 11: Performance comparison of larger-scale models on the GLUE test set.
516
+
517
+ <table><tr><td>Model</td><td>#Params</td><td>QNLI</td><td>MRPC</td><td>SST-2</td><td>CoLA</td><td>STS-B</td><td>MNLI-m/mm</td><td>RTE</td><td>QQP</td><td>AVG</td></tr><tr><td>TinyBERT-6 (Jiao et al., 2020)</td><td>67.0M</td><td>89.8</td><td>89.0</td><td>92.0</td><td>38.8</td><td>83.1</td><td>83.8/83.2</td><td>65.8</td><td>71.4</td><td>77.4</td></tr><tr><td>EfficientBERT (Dong et al., 2021)</td><td>70.1M</td><td>90.4</td><td>89.0</td><td>92.6</td><td>46.2</td><td>83.7</td><td>84.1/83.2</td><td>67.7</td><td>74.4</td><td>78.7</td></tr><tr><td>W-PCA-Large</td><td>66.9M</td><td>90.9</td><td>88.7</td><td>93.0</td><td>40.0</td><td>87.5</td><td>84.6/83.3</td><td>75.6</td><td>71.5</td><td>79.5</td></tr></table>
518
+
519
+ # H.2 COMPARISON OF LARGER-SIZED MODELS
520
+
521
+ In the main body of the paper, we primarily focus on lightweight language models with parameter sizes of approximately 15M and 10M. To investigate the applicability of W-PCA to larger language models, we expanded the search space described in Section 6.2.1 and searched for a larger model. Specifically, we doubled the hidden_size and the hidden_dimension of the $n$ candidate dimensions. Moreover, we increased the parameter limit in the genetic algorithm to 67M, resulting in our W-PCA-Large model. As presented in Table 11, our model outperforms the similarly sized TinyBERT-6 (Jiao et al., 2020) and EfficientBERT (Dong et al., 2021) in terms of average GLUE score, despite having a slightly lower parameter count. This indicates that W-PCA also exhibits strong adaptability in larger search spaces.
522
+
523
+ # H.3 COMPARISON WITH ONE-SHOT NAS
524
+
525
+ Table 12: Comparison of search and training stages for one-shot and zero-shot methods.
526
+
527
+ <table><tr><td>Method</td><td colspan="3">Search Stage</td><td colspan="2">Training Stage</td></tr><tr><td>one-shot</td><td>Pretrain the supernet</td><td>Finetune the supernet for downstream tasks</td><td>Finding the optimal neural network structure using genetic algorithm</td><td rowspan="2">Re-pretrain the optimal network structure</td><td rowspan="2">Fine-tune for each downstream task</td></tr><tr><td>zero-shot</td><td colspan="3">Finding the optimal neural network structure using genetic algorithm</td></tr></table>
528
+
529
+ In Section 4, we drew on the supernet composition method of the one-shot NAS approach SPOS (Guo et al., 2020) to construct the combination scheme for lightweight BERT models in our zero-shot NAS search space. To compare its performance with one-shot NAS, we now build an actual supernet, as shown in Figure 4, with a total of $m$ layers and $2 \times n$ candidate blocks in each layer. Before searching for the optimal structure with a genetic algorithm, we perform one round of pre-training and fine-tuning following the SPOS method: in each batch, a random path is selected from the $(2 \times n)^m$ combinations for forward propagation and backward parameter updates (a minimal sketch of this single-path update follows). After completing the pre-training and fine-tuning process, we proceed with the workflow described in the zero-shot NAS experiment: searching for the optimal network architecture on the supernet with a genetic algorithm, and then re-pretraining and fine-tuning this optimal architecture for downstream tasks.
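+ The single-path update can be sketched as follows, under our assumptions about the supernet interface; `forward_path` is a hypothetical method returning the training loss along one sampled path, and the optimizer is assumed to be PyTorch-style.
+
+ ```python
+ import random
+
+ def train_supernet_one_batch(supernet, batch, optimizer, m, n):
+     """SPOS-style update: sample one random path out of the (2*n)**m
+     combinations, then forward/backward only along that path."""
+     path = [random.randrange(2 * n) for _ in range(m)]  # one block per layer
+     loss = supernet.forward_path(batch, path)
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```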
530
+
531
+ Implementation Details. We first pretrain the supernet on English Wikipedia (Devlin et al., 2019) and BooksCorpus (Zhu et al., 2015), then utilize $90\%$ of the training set from each GLUE task for fine-tuning. We reserve the remaining $10\%$ of the MNLI task to evaluate the accuracy of architectures in the search. During the pre-training and fine-tuning process, the number of epochs is set to 10, and the batch_size is set to 256 for both. The learning rate for pre-training is set to 1e-4, and the learning rate for fine-tuning is set to 4e-4. The optimizer, weight decay, and learning rate adjustment strategy are the same as in the training section. The loss function used is still the MSE loss function described in Appendix F.
532
+
533
+ Results & Analysis. As displayed in Table 13, despite extensive computational resources devoted to the one-shot NAS search, the performance enhancement for various-sized models of W-PCA is not significant. To further investigate the reasons behind this, we outline the steps of one-shot NAS and zero-shot NAS in Table 12. Both approaches involve finding the optimal network structure through a search stage, followed by training this structure in the subsequent training stage. In the search stage of one-shot NAS, a supernet training is necessary, whereas zero-shot NAS only requires temporary construction of a neural network at each sampling step to compute the zero-shot proxy. Consequently, the memory overhead for zero-shot NAS is minimal. Contrasting with zero-shot NAS, one-shot NAS incurs additional time overhead due to pre-training the supernet on the corpus and performing global fine-tuning on downstream tasks. Additionally, one-shot NAS possesses extra pre-trained weights
534
+
535
+ Table 13: Comparison of zero-shot and one-shot methods on the GLUE test set in the same search space. "Time" also refers to the GPU time consumption in the NAS stage.
536
+
537
+ <table><tr><td>Model</td><td>Type</td><td>#Params</td><td>Time</td><td>QNLI</td><td>MRPC</td><td>SST-2</td><td>CoLA</td><td>STS-B</td><td>MNLI-m/mm</td><td>RTE</td><td>QQP</td><td>AVG</td></tr><tr><td rowspan="2">W-PCA-Tiny</td><td>zero-shot</td><td>9.6M</td><td>0.4 d</td><td>88.7</td><td>87.6</td><td>91.9</td><td>27.4</td><td>84.8</td><td>81.1/79.8</td><td>71.1</td><td>70.3</td><td>75.9</td></tr><tr><td>one-shot</td><td>9.7M</td><td>24 d</td><td>89.2</td><td>87.5</td><td>92.3</td><td>28.9</td><td>83.7</td><td>81.4/80.5</td><td>71.4</td><td>70.5</td><td>76.2</td></tr><tr><td rowspan="2">W-PCA-Small</td><td>zero-shot</td><td>15.6M</td><td>0.5 d</td><td>90.3</td><td>88.7</td><td>91.5</td><td>38.4</td><td>86.4</td><td>82.8/82.2</td><td>73.8</td><td>70.8</td><td>78.3</td></tr><tr><td>one-shot</td><td>15.6M</td><td>28 d</td><td>90.3</td><td>88.9</td><td>92.5</td><td>36.1</td><td>86.7</td><td>83.7/82.5</td><td>74.4</td><td>70.6</td><td>78.4</td></tr></table>
538
+
539
+ Table 14: Comparison of zero-shot NAS methods on the GLUE dev set within the EfficientBERT search space.
540
+
541
+ <table><tr><td>Proxy</td><td>#Params</td><td>Time</td><td>QNLI</td><td>MRPC</td><td>SST-2</td><td>CoLA</td><td>STS-B</td><td>MNLI-m</td><td>RTE</td><td>QQP</td><td>AVG</td></tr><tr><td>Synaptic Diversity (Zhou et al., 2022)</td><td>15.6M</td><td>1.0 d</td><td>84.1</td><td>85.2</td><td>86.1</td><td>21.1</td><td>76.5</td><td>77.2</td><td>66.5</td><td>78.2</td><td>71.9</td></tr><tr><td>Head Confidence (Serianni &amp; Kalita, 2023)</td><td>15.7M</td><td>0.8 d</td><td>83.1</td><td>84.4</td><td>86.8</td><td>21.8</td><td>80.9</td><td>76.3</td><td>66.1</td><td>84.6</td><td>73.0</td></tr><tr><td>Softmax Confidence (Serianni &amp; Kalita, 2023)</td><td>15.6M</td><td>0.8 d</td><td>84.0</td><td>83.0</td><td>87.3</td><td>21.2</td><td>78.7</td><td>76.7</td><td>68.3</td><td>77.6</td><td>72.1</td></tr><tr><td>W-PCA</td><td>15.6M</td><td>0.8 d</td><td>86.0</td><td>85.3</td><td>90.1</td><td>24.4</td><td>80.1</td><td>76.5</td><td>65.8</td><td>86.2</td><td>74.3</td></tr></table>
542
+
543
+ apart from the searched network structure itself. In contrast, once zero-shot NAS has identified a network structure, no pre-trained weights are reused: the weights are randomly initialized, which has only a relatively minor effect on accuracy. Therefore, zero-shot NAS remains the most cost-effective search solution.
544
+
545
+ # H.4 EXPERIMENTS ON EFFICIENTBERT SEARCH SPACE
546
+
547
+ Previous zero-shot NAS studies have primarily assessed performance on NAS Benchmark datasets, using rank correlation to evaluate the effectiveness of these zero-shot proxies. To facilitate a comparison with other zero-shot proxies applied to Transformer models in text tasks, we designed the search space described in Section 4 and compared our method within this context. To further ensure robustness, we conducted additional comparisons using the EfficientBERT search space, with the results shown in Table 14. The data demonstrate that our proposed zero-shot proxy maintains a significant advantage in accuracy across various search spaces.
548
+
549
+ # I CAUSAL LANGUAGE MODELING TASK
550
+
551
+ To validate our method more thoroughly, we further transfer it to the causal language modeling (CLM) task.
552
+
553
+ # I.1 SETUP
554
+
555
+ Searching. We use several OPT (Zhang et al., 2022) and LLaMA3 (Dubey et al., 2024) models with 1B-3B parameters as baseline models, keeping the number of layers unchanged while searching for the optimal combination of MHA blocks and FFN dimensions in each layer. For a fair comparison, we set the maximum parameter limit for the searched models to be the same as that of the baseline models.
556
+
557
+ Pretraining. We pretrain the models for causal language modeling (CLM) using a randomly selected $1\%$ of texts from the SlimPajama dataset (Soboleva et al., 2023). For this task, we train the models generated by each zero-shot proxy on 8 NVIDIA V100 GPUs, with a total batch size of 64 for 3 epochs. A cosine annealing schedule is applied to the learning rate, starting at $5 \times 10^{-4}$ . Optimization is performed using the AdamW optimizer with a weight decay of 0.1. To accommodate the differences in configurations, such as the tokenizer and training loss, between OPT and LLaMA3, we train two separate models for each proxy.
558
+
559
+ Instruction finetuning. To obtain models suitable for chat applications, we finetune each pretrained model on the instruction dataset databricks-dolly-15k (Conover et al., 2023) with the KD method MiniLLM (Gu et al., 2024). Specifically, we use OPT-13B and LLaMA3.2-11B as teacher models to supervise the other models. The training data and strategies employed are consistent with those of MiniLLM.
560
+
561
+ Table 15: Evaluation results referring to the average GPT-4 feedback scores (GPT4) and Rouge-L scores (R-L) obtained from 5 runs.
562
+
563
+ <table><tr><td rowspan="2">Teacher</td><td rowspan="2">Proxy</td><td rowspan="2">#Params</td><td colspan="2">DollyEval</td><td colspan="2">SelfInst</td><td colspan="2">VicunaEval</td><td>S-NI</td><td>UnNI</td></tr><tr><td>GPT4</td><td>R-L</td><td>GPT4</td><td>R-L</td><td>GPT4</td><td>R-L</td><td>R-L</td><td>R-L</td></tr><tr><td rowspan="5">OPT-13B</td><td>OPT-1.3B (baseline)</td><td>-</td><td>60.7</td><td>26.7</td><td>47.0</td><td>14.8</td><td>50.6</td><td>17.9</td><td>28.6</td><td>33.4</td></tr><tr><td>Synaptic Diversity (Zhou et al., 2022)</td><td>1.3B</td><td>60.4</td><td>26.9</td><td>47.3</td><td>15.2</td><td>51.8</td><td>17.6</td><td>29.1</td><td>34.2</td></tr><tr><td>Head Confidence (Serianni &amp; Kalita, 2023)</td><td>1.3B</td><td>61.5</td><td>27.3</td><td>48.0</td><td>15.3</td><td>51.5</td><td>18.1</td><td>29.5</td><td>34.3</td></tr><tr><td>Softmax Confidence (Serianni &amp; Kalita, 2023)</td><td>1.3B</td><td>61.2</td><td>27.1</td><td>47.5</td><td>14.6</td><td>50.8</td><td>18.2</td><td>28.9</td><td>33.9</td></tr><tr><td>W-PCA</td><td>1.3B</td><td>62.0</td><td>27.6</td><td>47.9</td><td>16.1</td><td>52.3</td><td>19.0</td><td>29.2</td><td>35.0</td></tr><tr><td rowspan="5">OPT-13B</td><td>OPT-2.7B (baseline)</td><td>-</td><td>63.2</td><td>27.4</td><td>52.7</td><td>17.2</td><td>55.9</td><td>19.1</td><td>30.7</td><td>35.1</td></tr><tr><td>Synaptic Diversity (Zhou et al., 2022)</td><td>2.7B</td><td>63.8</td><td>27.5</td><td>52.9</td><td>17.8</td><td>56.2</td><td>18.9</td><td>31.5</td><td>35.8</td></tr><tr><td>Head Confidence (Serianni &amp; Kalita, 2023)</td><td>2.7B</td><td>64.0</td><td>27.8</td><td>53.2</td><td>17.5</td><td>56.5</td><td>19.4</td><td>31.6</td><td>35.9</td></tr><tr><td>Softmax Confidence (Serianni &amp; Kalita, 2023)</td><td>2.7B</td><td>63.2</td><td>27.8</td><td>53.0</td><td>17.3</td><td>56.1</td><td>19.2</td><td>31.0</td><td>35.4</td></tr><tr><td>W-PCA</td><td>2.7B</td><td>64.5</td><td>28.3</td><td>53.9</td><td>18.1</td><td>57.2</td><td>20.1</td><td>32.1</td><td>36.2</td></tr><tr><td rowspan="5">LLaMA3.2-11B</td><td>LLaMA3.2-3B (baseline)</td><td>-</td><td>74.0</td><td>29.8</td><td>69.1</td><td>23.6</td><td>64.6</td><td>25.2</td><td>36.2</td><td>43.9</td></tr><tr><td>Synaptic Diversity (Zhou et al., 2022)</td><td>3B</td><td>73.8</td><td>30.5</td><td>70.1</td><td>23.8</td><td>65.0</td><td>25.4</td><td>36.7</td><td>44.8</td></tr><tr><td>Head Confidence (Serianni &amp; Kalita, 2023)</td><td>3B</td><td>74.4</td><td>30.9</td><td>71.4</td><td>24.2</td><td>65.2</td><td>26.2</td><td>37.1</td><td>45.6</td></tr><tr><td>Softmax Confidence (Serianni &amp; Kalita, 2023)</td><td>3B</td><td>74.1</td><td>30.1</td><td>70.4</td><td>23.6</td><td>64.9</td><td>25.9</td><td>37.2</td><td>44.3</td></tr><tr><td>W-PCA</td><td>3B</td><td>75.1</td><td>31.4</td><td>71.8</td><td>24.7</td><td>65.9</td><td>26.8</td><td>37.6</td><td>45.2</td></tr></table>
564
+
565
+ Each model required approximately 20 GPU days for pretraining and around 28 GPU hours for finetuning in our experiments.
566
+
567
+ Quantitative evaluation. We perform a numerical evaluation on five instruction-following datasets:
568
+
569
+ - DollyEval: This is a 500-sample test set that we extracted from the databricks-dolly-15k dataset.
570
+ - SelfInst (Wang et al., 2022a): A user-oriented instruction-following set comprising 252 samples.
571
+ - VicunaEval (Chiang et al., 2023): The evaluation includes 80 challenging questions used in the Vicuna project.
572
+ - S-NI: The test set of Super-NaturalInstructions (Wang et al., 2022b), which consists of 9,000 samples across 119 tasks.
573
+ - UnNI: The core set of UnnaturalInstructions (Honovich et al., 2022), which contains 60,000 samples.
574
+
575
+ # I.2 QUANTITATIVE RESULTS ON LANGUAGE BENCHMARKS
576
+
577
+ We evaluated the performance of the W-PCA model by comparing it with other models trained using MiniLLM. As shown in Table 15, across various model sizes, W-PCA consistently identifies architectures that outperform the baseline in all evaluation metrics. Compared to other zero-shot NAS methods, W-PCA leads in 6 out of 8 metrics for the OPT-1.3B size, all metrics for the OPT-2.7B size, and 7 out of 8 metrics for the LLaMA3.2-3B size. These results demonstrate the effectiveness of our method in transferring performance to CLM tasks.
2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42e64b8203988a5ebe7ae8b9d686e76f57e105efcbb51d9eea0803188a51a8bf
3
+ size 1093780
2025/W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/40bacbcf-2bbe-4071-992e-5b12d4460b40_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/40bacbcf-2bbe-4071-992e-5b12d4460b40_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/40bacbcf-2bbe-4071-992e-5b12d4460b40_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:335b2bffd6ec3cfc442eee06e1dbd6c5c152e0afbd23b8779af7ca3fb81638e8
3
+ size 2167730
2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74c3390b5a8da88a4ba3cfb6d1e1ef30fefeaef5c630bdb78afe41865b0ec4b0
3
+ size 476203
2025/Ward_ Provable RAG Dataset Inference via LLM Watermarks/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/af6e964d-520b-4b92-a1b8-dbbc50874af3_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/af6e964d-520b-4b92-a1b8-dbbc50874af3_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/af6e964d-520b-4b92-a1b8-dbbc50874af3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:81781e97375a1e6088b4fd4b448ead4532e80ea2f836d359538addb358da3c21
3
+ size 1725604
2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b2c020cab29790a4fcde77675f28854d09f30325d5f050e9fdbb417bf7cd4441
3
+ size 1235721
2025/WardropNet_ Traffic Flow Predictions via Equilibrium-Augmented Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/eec3886d-db73-41c8-8176-f1d6479fe735_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/eec3886d-db73-41c8-8176-f1d6479fe735_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/eec3886d-db73-41c8-8176-f1d6479fe735_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2eac5e26ece15cf35b0f85919e3e4356f6b526d9f9bc351b892a86fcbb253230
3
+ size 6034167
2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/full.md ADDED
@@ -0,0 +1,479 @@
 
 
 
 
1
+ # WARM DIFFUSION: RECIPE FOR BLUR-NOISE MIXTURE DIFFUSION MODELS
2
+
3
+ Hao-Chien Hsueh Wen-Hsiao Peng Ching-Chun Huang
4
+
5
+ National Yang Ming Chiao Tung University, Taiwan
6
+
7
+ # ABSTRACT
8
+
9
+ Diffusion probabilistic models have achieved remarkable success in generative tasks across diverse data types. While recent studies have explored alternative degradation processes beyond Gaussian noise, this paper bridges two key diffusion paradigms: hot diffusion, which relies entirely on noise, and cold diffusion, which uses only blurring without noise. We argue that hot diffusion fails to exploit the strong correlation between high-frequency image detail and low-frequency structures, leading to random behaviors in the early steps of generation. Conversely, while cold diffusion leverages image correlations for prediction, it neglects the role of noise (randomness) in shaping the data manifold, resulting in out-of-manifold issues and partially explaining its performance drop. To integrate both strengths, we propose Warm Diffusion, a unified Blur-Noise Mixture Diffusion Model (BNMD), to control blurring and noise jointly. Our divide-and-conquer strategy exploits the spectral dependency in images, simplifying score model estimation by disentangling the denoising and deblurring processes. We further analyze the Blur-to-Noise Ratio (BNR) using spectral analysis to investigate the trade-off between model learning dynamics and changes in the data manifold. Extensive experiments across benchmarks validate the effectiveness of our approach for image generation.
10
+
11
+ # 1 INTRODUCTION
12
+
13
+ Diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021; Song et al., 2022) have gained significant attention for their ability to learn data distributions through denoising, leading to impressive generation quality. These generative models typically employ a stochastic process that gradually transforms complex data distributions into simpler forms by adding a small amount of Gaussian noise in each forward iteration, eventually arriving at a simple Gaussian distribution. The reverse process involves using a neural network to model the score (Hyvärinen & Dayan, 2005) of a noise-level-dependent marginal distribution, iteratively adapting the denoised samples to recover the input data distribution. However, the process of learning this score estimator is domain-agnostic, focusing solely on recovering the underlying signal by removing Gaussian noise without considering the inherent properties of the modeled data. While this universal approach is effective for various data modalities, we argue that it leaves room for improvement in modeling images. Specifically, it overlooks the strong correlation between high-frequency image detail and low-frequency structures—a relationship we term spectral dependency. This correlation suggests that an efficient image generation process should progress from common low-frequency components to diverse high-frequency detail.
14
+
15
+ Recently, a large number of studies (Bansal et al., 2022; Daras et al., 2022; Rissanen et al., 2023; Hoogeboom & Salimans, 2024; Luo et al., 2023; Delbracio & Milanfar, 2024; Liu et al., 2023; Yue et al., 2023; Liu et al., 2024) explored various alternatives to the conventional noise-driven forward/reverse process of diffusion models, with the aim of accelerating the reverse process and solving specific inverse or image translation problems. Some of these techniques adopt a cold diffusion process, which replaces the stochastic Gaussian degradation process with deterministic image transformations, e.g. blurring. These advancements underscore the evolving nature of diffusion probabilistic models in formulating the forward/reverse process. Although these methods work well in solving specific restoration tasks, most of them struggle with generating diverse and high-quality samples, as compared to typical noise-driven hot diffusion models.
16
+
17
+ ![](images/dd37848b7d1ab6a6e9f4a47b015d9c0b4cf3694f69da60ac23f38fd96e5dc87b.jpg)
18
+ Figure 1: Illustration of Warm Diffusion, the proposed two-pronged diffusion process. (a) Graphical models of the proposed blur-noise mixture diffusion processes, offering flexibility in selecting blur and noise levels, thereby enabling a smooth transition between (1) Hot Diffusion and (4) Cold Diffusion. (b) The proposed divide-and-conquer strategy employs a joint model for denoising and deblurring. It recovers the blurry signal obscured by noise and restores missing high-frequency detail while explicitly accounting for the spectral dependency of images. (c) Data manifolds for the two diffusion processes at varying blur-to-noise ratios (BNR). The red and blue lines represent the means of Gaussian distributions derived from samples sharing low-frequency content but differing in high-frequency detail. As BNR increases, the data manifolds shift and merge at earlier stages, as blurring filters out high-frequency detail, leaving only the shared low-frequency signal. This shift may lead to out-of-manifold issues, discussed in detail in Sec. 4.3.
19
+
20
+ As shown in Fig. 1, we revisit the design of diffusion probabilistic models, expanding the pure denoising process to a joint denoising and deblurring approach. We discuss the limitations of existing hot and cold diffusion methods, particularly in terms of network learning and data manifold modeling. By taking into account the spectral dependency of images and the diversity introduced by Gaussian noise, we propose a method that balances blurring and noise levels. This enhances the learning process of the decoding neural network, maintains the diversity of the data manifold, and ultimately improves the quality of generated images. Our approach, targeting image generation, has the following contributions:
21
+
22
+ - We propose a warm diffusion process that combines blurring and noise in the forward process. The scheme allows flexible control over blur and noise levels, enabling joint deblurring and denoising to enhance image generation quality.
23
+ - We introduce the new concept of Blur-to-Noise Ratio (BNR) control and show that increasing the BNR (leaning toward cold diffusion) simplifies model learning by leveraging spectral dependency effectively. However, this also increases the risk of samples deviating from the data manifold during the reverse process.
24
+ - We select the BNR by analyzing the spectra of images and Gaussian white noise. The difference in their power spectral densities guides us to find a suitable BNR that balances two key factors: preserving the integrity of the data manifold and simplifying neural network learning.
25
+ - Extensive experiments across various datasets show that our approach outperforms state-of-the-art diffusion methods in terms of image generation quality.
26
+
27
+ # 2 RELATED WORKS
28
+
29
+ # 2.1 DIFFUSION PROBABILISTIC MODELS FOR IMAGE GENERATION
30
+
31
+ Diffusion models (Ho et al., 2020) consist of two key processes: a forward process that progressively transforms the data distribution into a Gaussian distribution by adding noise and a reverse process
32
+
33
+ that learns to denoise and recover the original data distribution. Various noise scheduling strategies have been explored across different studies. IDDPM (Nichol & Dhariwal, 2021) introduced a cosine noise schedule that gradually destroys the signal at a slower rate. Score-SDE (Song et al., 2021) provided a unified framework that represents models like SMLD (Song & Ermon, 2019) and DDPM (Ho et al., 2020) within a continuous state space, using different discretizations and distinct Stochastic Differential Equations (SDEs). EDM (Karras et al., 2022) further elucidates the design space of diffusion models. Extensive discussions within this work provide detailed analyses of training and sampling strategies, including preconditioning the network's input and output and adjusting the loss function. These enhancements, which consider neural network properties by maintaining unit variance for the model's input and output and mitigating large gradient variations, have led to significantly improved results. However, these improvements primarily address challenges posed by Gaussian noise without leveraging key image properties, such as spectral dependency, and they overlook the potential of alternative corruption processes.
34
+
35
+ # 2.2 DIFFUSION-LIKE MODELS WITH VARIATIONS OF THE CORRUPTION PROCESS
36
+
37
+ Recently, researchers have explored modifications to the corruption process within diffusion models. Cold Diffusion (Bansal et al., 2022) replaced traditional noise-based transition functions with transformations such as blur, masking, and pixelation. This innovation created a diffusion-like generative model by inverting arbitrary image transforms; however, results indicate that cold diffusion struggles to maintain high-quality outcomes. Concurrently, IHDM (Rissanen et al., 2023) introduced a progressive blurring process, where the model learns to iteratively restore blurred signals, essentially acting as the "inverse" of heat dissipation. Blurring Diffusion (Hoogeboom & Salimans, 2024) established a connection between IHDM and Gaussian diffusion (Ho et al., 2020), demonstrating that IHDM can be interpreted as a form of Gaussian diffusion in the frequency domain, albeit with different schedules across frequency bands. By integrating the blur and noise schedules from both IHDM and iDDPM (Nichol & Dhariwal, 2021), Blurring Diffusion achieved better generation quality compared to IHDM, though at the cost of more training iterations. However, these aforementioned approaches primarily focus on the mean transition function and lack a global perspective that jointly considers the role of Gaussian noise within the diffusion process, which contributes to a drop in performance.
38
+
39
+ # 3 PRELIMINARY: DENOISING DIFFUSION IMPLICIT MODELS
40
+
41
+ Consider a class of inference distributions, indexed by a vector $\sigma$:
42
+
43
+ $$
44
+ q _ {\sigma} \left(\boldsymbol {x} _ {1: T} \mid \boldsymbol {x} _ {0}\right) := q _ {\sigma} \left(\boldsymbol {x} _ {T} \mid \boldsymbol {x} _ {0}\right) \prod_ {t = 2} ^ {T} q _ {\sigma} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {t}, \boldsymbol {x} _ {0}\right), \tag {1}
45
+ $$
46
+
47
+ where given $q_{\sigma}(\pmb{x}_T|\pmb{x}_0) = \mathcal{N}(\sqrt{\alpha_T}\pmb{x}_0, (1 - \alpha_T)\pmb{I})$ , there exists a family of posterior distributions for all $t > 1$ :
48
+
49
+ $$
50
+ q _ {\sigma} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {t}, \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\sqrt {\alpha_ {t - 1}} \boldsymbol {x} _ {0} + \sqrt {1 - \alpha_ {t - 1} - \sigma_ {t} ^ {2}} \cdot \frac {\boldsymbol {x} _ {t} - \sqrt {\alpha_ {t}} \boldsymbol {x} _ {0}}{\sqrt {1 - \alpha_ {t}}}, \sigma_ {t} ^ {2} \boldsymbol {I}\right), \tag {2}
51
+ $$
52
+
53
+ such that $q_{\sigma}(\boldsymbol{x}_t|\boldsymbol{x}_0) = \mathcal{N}(\sqrt{\alpha_t}\boldsymbol{x}_0, (1 - \alpha_t)\boldsymbol{I})$ for all $t$ , which matches the marginals as DDPM (Ho et al., 2020).
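+
+ To make this marginal-matching property concrete, the following minimal numpy sketch (our illustration, not code from DDIM) composes the marginal at $t$ with the posterior of Eq. (2) in the scalar case and checks the result against the marginal at $t-1$; all constants are arbitrary.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ x0 = 1.3                             # a scalar stand-in for an image
+ a_t, a_tm1, sigma_t = 0.5, 0.7, 0.2  # alpha_t, alpha_{t-1}, posterior std
+
+ # Sample x_t ~ q(x_t | x_0) = N(sqrt(a_t) x0, (1 - a_t) I).
+ n = 1_000_000
+ x_t = np.sqrt(a_t) * x0 + np.sqrt(1 - a_t) * rng.standard_normal(n)
+
+ # Sample x_{t-1} from the posterior of Eq. (2).
+ mean = (np.sqrt(a_tm1) * x0
+         + np.sqrt(1 - a_tm1 - sigma_t**2) * (x_t - np.sqrt(a_t) * x0) / np.sqrt(1 - a_t))
+ x_tm1 = mean + sigma_t * rng.standard_normal(n)
+
+ # The composed samples should match N(sqrt(a_{t-1}) x0, (1 - a_{t-1}) I).
+ print(x_tm1.mean(), np.sqrt(a_tm1) * x0)   # both ~= 1.088
+ print(x_tm1.var(), 1 - a_tm1)              # both ~= 0.3
+ ```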
54
+
55
+ The trainable generative process is defined as Markovian, where each $p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_t)$ aims to utilize the knowledge of $q_{\sigma}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_t, \boldsymbol{x}_0)$. In a sense, given a noisy observation $\boldsymbol{x}_t$, we first predict a denoised observation $D_{\theta}^{(t)}(\boldsymbol{x}_t)$, and then obtain $\boldsymbol{x}_{t-1}$ with $q_{\sigma}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_t, D_{\theta}^{(t)}(\boldsymbol{x}_t))$. The learning objective can therefore be parameterized as:
56
+
57
+ $$
58
+ L \left(D_{\theta}^{(t)}\right) = \lambda (t) \left\| D_{\theta}^{(t)} \left(\boldsymbol{x}_{t}\right) - \boldsymbol{x}_{0} \right\|_{2}^{2}. \tag {3}
59
+ $$
60
+
61
+ # 4 BLUR-NOISE MIXTURE DIFFUSION MODELS (BNMD)
62
+
63
+ To enhance image generation, we introduce a blur-noise mixture diffusion model (BNMD). BNMD extends the state space of the diffusion process to two dimensions, controlled by the corruption
64
+
65
+ ![](images/457bcdccd34120ec4f9913c5924d3136ccff7c4b4f6038649bce33e6d484edd3.jpg)
66
+ Figure 2: Workflow of the proposed diffusion process. The forward process progressively applies blurring and noise, controlled by the Blur-to-Noise Ratio (BNR), to degrade the sample from high quality to low quality. During this phase, training pairs are collected to train the prediction model (e.g., U-Net) for use in the reverse process. For sample generation, the reverse process works as follows: (a) The prediction model simultaneously performs denoising and deblurring. (b) With the prediction results, the reverse step transitions the sample from step $t$ to $t - 1$ . Specifically, the denoiser gradually guides the sample toward a blurry prediction, while the deblurring prediction helps return the sample to a higher-quality state.
67
+
68
+ factors $\alpha$ and $\beta$ , which represent the blur and noise levels, respectively. In the forward process, the schedules of $\alpha$ and $\beta$ determine a blend of blurring and noising operations. Consequently, the reverse process iteratively recovers a high-quality image by deblurring and denoising a sampled image drawn from a prior distribution (i.e., a standard normal distribution). As shown in Fig. 2, mixing deblurring and denoising throughout the iterative image generation process distinguishes our approach from most existing diffusion models, which typically generate images through denoising alone. Moreover, our forward and generative processes are defined similarly to DDIM (Song et al., 2022), but with a focus on incorporating blur-noise operations into both stages. For brevity, we primarily address the key terms from DDIM (Song et al., 2022) that require adaptation for our model.
69
+
70
+ # 4.1 BLUR-NOISE FORWARD DIFFUSION PROCESSES
71
+
72
+ Our blur-noise forward process has the marginal distributions for $t \in \{1, \dots, T\}$ as:
73
+
74
+ $$
75
+ q \left(\boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {V} \boldsymbol {M} _ {\alpha_ {t}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0}, \beta_ {t} ^ {2} \boldsymbol {I}\right), \tag {4}
76
+ $$
77
+
78
+ where $\mathbf{V}^T$ and $\mathbf{V}$ denote the forward and inverse Discrete Cosine Transform (DCT), respectively. $M_{\alpha_t}$ , a diagonal matrix, specifies the Gaussian blurring mask in the DCT domain, which varies according to the Gaussian blur level $\alpha_t$ . The parameter $\beta_t$ controls the level of Gaussian noise. The corruption sequences $\alpha_1, \ldots, \alpha_T$ and $\beta_1, \ldots, \beta_T$ are defined as monotonically increasing sequences. As with DDIM (Song et al., 2022), Eq. (4) requires the inference transition distributions (i.e., those used in the reverse process) $q_{\sigma}(.)$ for all $t > 1$ (see Appendix A.1) to be:
79
+
80
+ $$
81
+ q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {t - 1}, \beta_ {t - 1}} \mid \boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}}, \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {V} \boldsymbol {M} _ {\alpha_ {t - 1}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0} + \sqrt {\beta_ {t - 1} ^ {2} - \sigma_ {t} ^ {2}} \cdot \frac {\boldsymbol {x} _ {\alpha_ {t} , \beta_ {t}} - \boldsymbol {V} \boldsymbol {M} _ {\alpha_ {t}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0}}{\beta_ {t}}, \sigma_ {t} ^ {2} \boldsymbol {I}\right). \tag {5}
82
+ $$
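+
+ As a concrete illustration of Eq. (4), the sketch below draws one blurry-noisy sample using the orthonormal 2D DCT. The exponential heat-kernel form of the mask is our assumption (following IHDM-style blurring); the paper's exact Gaussian blurring mask may be scheduled differently.
+
+ ```python
+ import numpy as np
+ from scipy.fft import dctn, idctn
+
+ def blur_noise_corrupt(x0, alpha, beta, rng):
+     """Sample x ~ N(V M_alpha V^T x0, beta^2 I) as in Eq. (4)."""
+     h, w = x0.shape
+     fy = np.pi * np.arange(h) / h                  # DCT frequencies per axis
+     fx = np.pi * np.arange(w) / w
+     lam = fy[:, None] ** 2 + fx[None, :] ** 2      # Laplacian eigenvalues
+     mask = np.exp(-0.5 * alpha ** 2 * lam)         # diagonal of M_alpha (assumed form)
+     blurry = idctn(mask * dctn(x0, norm="ortho"), norm="ortho")  # V M V^T x0
+     return blurry + beta * rng.standard_normal(x0.shape)
+
+ rng = np.random.default_rng(0)
+ x0 = rng.standard_normal((32, 32))                 # stand-in for an image
+ x_t = blur_noise_corrupt(x0, alpha=1.5, beta=0.4, rng=rng)
+ ```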
83
+
84
+ # 4.2 GENERATIVE PROCESS IN A DIVIDE-AND-CONQUER MANNER
85
+
86
+ Our generative process is a Markovian process specified by learnable transition distributions $p_{\theta}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}})$ for $t > 1$. Training these distributions involves maximizing a variational lower bound, which requires minimizing, among other terms, the sum of the KL-divergence terms $KL(q_{\sigma}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}}, \pmb{x}_{0}) \,\|\, p_{\theta}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}}))$ for $t > 1$.
87
+
88
+ Notably, parameterizing $p_{\theta}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}})$ amounts to leveraging the knowledge about $q_{\sigma}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}}, \pmb{x}_{0})$ and learning a mean function to predict the mean in Eq. (5) based on the blurry-noisy observation $\pmb{x}_{\alpha_{t}, \beta_{t}}$. At inference time, we do not have access to the input $\pmb{x}_{0}$; as such, one straightforward approach to parameterizing $p_{\theta}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}})$ is to learn a network $\hat{\pmb{x}}_{0} = F_{\theta}(\pmb{x}_{\alpha_{t}, \beta_{t}}, t)$ that predicts $\pmb{x}_{0}$ from $\pmb{x}_{\alpha_{t}, \beta_{t}}$, and then set $p_{\theta}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}}) = q_{\sigma}(\pmb{x}_{\alpha_{t-1}, \beta_{t-1}} | \pmb{x}_{\alpha_{t}, \beta_{t}}, \hat{\pmb{x}}_{0})$.
89
+
90
+ Instead of predicting $\pmb{x}_0$ from $\pmb{x}_{\alpha_t,\beta_t}$ directly with a single network, this work introduces a novel divide-and-conquer approach to parameterizing $p_{\theta}(x_{\alpha_{t - 1},\beta_{t - 1}}|x_{\alpha_t,\beta_t})$ . This is motivated by the observation that the mean in Eq. (5) can be expressed alternatively as:
91
+
92
+ $$
93
+ \boldsymbol{x}_{\alpha_{t}, \beta_{t}} + \underbrace{\boldsymbol{V} \left(\boldsymbol{M}_{\alpha_{t-1}} - \boldsymbol{M}_{\alpha_{t}}\right) \boldsymbol{V}^{T} \boldsymbol{x}_{0}}_{\text{add missing high-frequency detail}} + \left(\beta_{t} - \sqrt{\beta_{t-1}^{2} - \sigma_{t}^{2}}\right) \cdot \underbrace{\frac{\boldsymbol{V} \boldsymbol{M}_{\alpha_{t}} \boldsymbol{V}^{T} \boldsymbol{x}_{0} - \boldsymbol{x}_{\alpha_{t}, \beta_{t}}}{\beta_{t}}}_{\text{direction pointing to blurry } \boldsymbol{x}_{0}}, \tag{6}
94
+ $$
95
+
96
+ where we have additionally added and subtracted the same term $\pmb{x}_{\alpha_{t},\beta_{t}} - \pmb{V}M_{\alpha_{t}}\pmb{V}^{T}\pmb{x}_{0}$ . This alternative expression suggests that the task of parameterizing $p_{\theta}(\pmb{x}_{\alpha_{t-1},\beta_{t-1}}|\pmb{x}_{\alpha_{t},\beta_{t}})$ by learning a mean function to predict that of Eq. (5) can be decomposed into two sub-tasks: deblurring and denoising. The former aims to reconstruct the high-frequency detail of $\pmb{x}_{0}$ via the iterative generation process, while the latter is to recover a blurry version (i.e., $\pmb{V}\pmb{M}_{\alpha_{t}}\pmb{V}^{T}\pmb{x}_{0}$ ) of $\pmb{x}_{0}$ from its noisy observation (i.e., $\pmb{x}_{\alpha_{t},\beta_{t}}$ ) by focusing on the reconstruction of the low-frequency components of $\pmb{x}_{0}$ . This interpretation allows us to learn two specialized networks for addressing these separate sub-tasks, leading to the more efficient and accurate generation of output images.
97
+
98
+ Specifically, we learn a network $D_{\theta}$ that takes the blurry-noisy observation $\pmb{x}_{\alpha_t,\beta_t} = \pmb{V}\pmb{M}_{\alpha_t}\pmb{V}^T\pmb{x}_0 + \beta_t\pmb{\epsilon},\ \pmb{\epsilon} \sim \mathcal{N}(\pmb{0},\pmb{I})$ as input to predict the noise-free yet blurry representation $\pmb{V}\pmb{M}_{\alpha_t}\pmb{V}^T\pmb{x}_0$ of $\pmb{x}_0$. When learned successfully, $D_{\theta}$ is able to denoise $\pmb{x}_{\alpha_t,\beta_t}$. For deblurring, a separate network $R_{\theta}$, which takes the same $\pmb{x}_{\alpha_t,\beta_t}$ as input, is trained to restore $\pmb{V}\pmb{M}_{\alpha_t}\pmb{V}^T\pmb{x}_0$ to $\pmb{x}_0$. That is, $R_{\theta}$ aims to recover the missing high-frequency detail in $\pmb{V}\pmb{M}_{\alpha_t}\pmb{V}^T\pmb{x}_0$, in order to reconstruct $\pmb{x}_0$. In symbols, $R_{\theta}$ is meant to predict $\pmb{x}_{res_t} = \pmb{x}_0 - \pmb{V}\pmb{M}_{\alpha_t}\pmb{V}^T\pmb{x}_0 = \pmb{V}(\pmb{I} - \pmb{M}_{\alpha_t})\pmb{V}^T\pmb{x}_0$. With our proposed parameterization, and given that $\pmb{M}_{\alpha_t}$ is a diagonal matrix, the mean function of $p_{\theta}(\pmb{x}_{\alpha_{t-1},\beta_{t-1}}|\pmb{x}_{\alpha_t,\beta_t})$ is:
99
+
100
+ $$
101
+ \begin{aligned} & \boldsymbol{x}_{\alpha_{t}, \beta_{t}} + \boldsymbol{V} \left(\boldsymbol{M}_{\alpha_{t-1}} - \boldsymbol{M}_{\alpha_{t}}\right) \left(\boldsymbol{I} - \boldsymbol{M}_{\alpha_{t}}\right)^{-1} \boldsymbol{V}^{T} R_{\theta} \left(\boldsymbol{x}_{\alpha_{t}, \beta_{t}}, \alpha_{t}, \beta_{t}\right) \\ & \quad + \left(\beta_{t} - \sqrt{\beta_{t-1}^{2} - \sigma_{t}^{2}}\right) \cdot \frac{D_{\theta} \left(\boldsymbol{x}_{\alpha_{t}, \beta_{t}}, \alpha_{t}, \beta_{t}\right) - \boldsymbol{x}_{\alpha_{t}, \beta_{t}}}{\beta_{t}}. \end{aligned} \tag{7}
102
+ $$
103
+
104
+ Considering the equation $\pmb{x}_{\text{res}_t} = \pmb{V}(\pmb{I} - \pmb{M}_{\alpha_t})\pmb{V}^T\pmb{x}_0$, the second term $\pmb{V}(M_{\alpha_{t-1}} - M_{\alpha_t})\pmb{V}^T\pmb{x}_0$ in Eq. (6) can be rewritten as $\pmb{V}(M_{\alpha_{t-1}} - M_{\alpha_t})(\pmb{I} - M_{\alpha_t})^{-1}\pmb{V}^T\pmb{x}_{\text{res}_t}$ to match the form of its counterpart in Eq. (7). To approximate Eq. (6) using Eq. (7), we then train $D_\theta$ and $R_\theta$ by minimizing the following mean squared error (MSE) losses:
105
+
106
+ $$
107
+ L \left(D_{\theta}\right) = \left\| D_{\theta} \left(\boldsymbol{x}_{\alpha_{t}, \beta_{t}}, \alpha_{t}, \beta_{t}\right) - \left(\boldsymbol{V} \boldsymbol{M}_{\alpha_{t}} \boldsymbol{V}^{T} \boldsymbol{x}_{0}\right) \right\|_{2}^{2}, \quad \text{and} \tag{8}
108
+ $$
109
+
110
+ $$
111
+ L \left(R _ {\theta}\right) = \left\| R _ {\theta} \left(\boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}}, \alpha_ {t}, \beta_ {t}\right) - \boldsymbol {x} _ {\text {r e s} _ {t}} \right\| _ {2} ^ {2}. \tag {9}
112
+ $$
113
+
114
+ Here, given a noisy and blurry observation $\pmb{x}_{\alpha_t,\beta_t} = \pmb{V}\pmb{M}_{\alpha_t}\pmb{V}^T\pmb{x}_0 + \beta_t\pmb{\epsilon}$ at time $t$, along with the corresponding corruption factors $\alpha_{t}$ and $\beta_{t}$, $D_{\theta}$ (Denoiser) is trained to recover the pure blurry image by removing the noise component, while $R_{\theta}$ (Deblurrer) extracts the high-frequency detail $\pmb{x}_{res_t}$. Finally, at time $t$, the prediction of the clean image is given by $\hat{\pmb{x}}_0 = F_\theta = D_\theta + R_\theta$.
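+
+ The numpy sketch below (ours, not the authors' implementation) assembles the reverse-step mean of Eq. (7) from the two heads' outputs. Here `mask_t` and `mask_tm1` are assumed to hold the diagonals of $M_{\alpha_t}$ and $M_{\alpha_{t-1}}$ as 2D arrays of DCT-domain gains.
+
+ ```python
+ import numpy as np
+ from scipy.fft import dctn, idctn
+
+ def reverse_mean(x_t, D, R, mask_t, mask_tm1, beta_t, beta_tm1, sigma_t):
+     """Mean of p_theta(x_{t-1} | x_t) per Eq. (7)."""
+     # Deblurring term: V (M_{t-1} - M_t)(I - M_t)^{-1} V^T R. The DC entry of
+     # (I - M_t) is zero, but so is the matching numerator, so we zero it out.
+     num = mask_tm1 - mask_t
+     gain = np.divide(num, 1.0 - mask_t, out=np.zeros_like(num),
+                      where=(1.0 - mask_t) > 1e-12)
+     deblur = idctn(gain * dctn(R, norm="ortho"), norm="ortho")
+     # Denoising term: a step along the direction pointing to the blurry x0.
+     step = beta_t - np.sqrt(beta_tm1 ** 2 - sigma_t ** 2)
+     return x_t + deblur + step * (D - x_t) / beta_t
+ ```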
115
+
116
+ # 4.3 THE IMPACT OF BLUR-TO-NOISE RATIO ON MODEL BEHAVIOR AND DATA MANIFOLD
117
+
118
+ We have demonstrated how to combine two degradation factors—blurring and noising—into a unified diffusion process and introduced training objectives that simultaneously address deblurring and
119
+
120
+ ![](images/98ab140694790601f70545260246087d39dc9c3f0bf992d254909cf7435e62ad.jpg)
121
+ Figure 3: Impact of Varying BNRs on Model Behavior. We illustrate the observed signal, denoising target, and deblurring target, along with their respective signal spectrum analyses, across different BNR values. From left to right, the noise level remains constant while the BNR value increases. As the BNR rises, the denoising task (red arrow) becomes progressively easier, shifting more responsibility to the deblurring task (blue arrow) and effectively utilizing the spectral dependency of images. In contrast, when $\mathrm{BNR} = 0$ , the model requires a stronger denoiser to directly generate the image, without leveraging the spectral dependency assistance from the deblurrer.
122
+
123
+ denoising for image generation. However, the relationship between the blur and noise levels remains unclear. To explore this connection, we define a factor called the Blur-to-Noise Ratio (BNR):
124
+
125
+ $$
126
+ \mathrm{BNR} = \frac{\text{Blur Level}}{\text{Noise Level}} = \frac{\alpha}{\beta}, \tag{10}
127
+ $$
128
+
129
+ which represents the ratio of the blur level to the noise level. As illustrated in Fig. 1, increasing the BNR value from 0 to $\infty$ transitions the diffusion path from hot to cold diffusion. To further investigate the model behavior with varying BNR values, we examine the learning objectives introduced in Sec. 4.2, which consist of two distinct branches targeting denoising and deblurring, respectively. Fig. 3 shows the signal spectra of images and the corresponding training objectives for different BNR values. With a constant noise level, an increase in BNR would raise the blur level, thereby simplifying the denoising task. This shift occurs because the denoiser no longer needs to restore high-frequency detail, transferring that responsibility to the deblurring task. Additionally, by effectively utilizing spectral dependency, the deblurrer can efficiently learn a mapping function from the low-frequency observation to its high-frequency counterpart.
130
+
131
+ In addition to affecting model behavior, varying BNR values also lead to shifts in data manifolds during forward iterations. To explore this in greater detail, we compare the data manifolds under low and high BNR scenarios with the same blur level in Fig. 4. At each forward or reverse step, low BNR cases exhibit a larger noise-covering space due to higher randomness compared to high BNR cases. Consequently, a high BNR results in reduced data diversity, making generated samples more likely to fall out of the data manifold, particularly during the early forward steps. When encountering out-of-manifold cases in sample generation, the diffusion network must handle degraded samples that were not seen during training, leading to less predictable outcomes and lower generation quality.
132
+
133
+ The analyses above reveal an inherent trade-off between model learning dynamics and data manifold shifts, where the choice of BNR value plays a crucial role in generation quality. As the diffusion process transitions from hot to cold, the model increasingly depends on leveraging the spectral dependency of images for learning. However, this shift also introduces the risk of divergence from the data manifold during generation, potentially resulting in degraded performance.
134
+
135
+ # 4.4 SELECTING BNR
136
+
137
+ ![](images/7ed3ba05c62467ca9439a1ca1189c51bd519e4c13d499c825113d2eaaacf703f.jpg)
138
+ Figure 4: Illustration of the connection between BNR and the data manifold. When comparing two different BNRs at the same blur level, a higher BNR corresponds to a smaller noise scale, resulting in a narrower noise-covering space, as shown on the right. In the deblurring (reverse) step, a sample is guided toward the deblurring target, representing the mean image of all possible paired outputs. It is important to note that during the forward process, a single low-quality (LQ) sample is typically paired with multiple high-quality (HQ) samples for training. Due to this ill-posed nature of the deblurring task, samples with higher BNR values are more likely to deviate from the data manifold during the transition. Once samples fall out of the manifold, the neural network struggles to produce accurate predictions, leading to a decline in generation quality.
139
+
140
+ We have discussed how BNR influences model behavior and the data manifold. A higher BNR allows the model to better exploit spectral dependency, simplifying learning by shifting the focus to deblurring. However, it also heightens the risk of samples deviating from the data manifold. Striking the right balance is crucial, as one must weigh the benefits of more straightforward neural network training against the risk of such deviations. This raises the question of whether a better BNR value exists that preserves the data manifold's integrity while enhancing model training.
141
+
142
+ An interesting observation from previous studies (van der Schaaf & van Hateren, 1996; Rissanen et al., 2023) is that the power spectral density of natural images follows an approximate power law, $1 / f^{\alpha}$, where $\alpha \approx 2$ (this exponent is unrelated to the blur level $\alpha_t$). In contrast, Gaussian white noise exhibits a flat frequency response across all frequency bands. This discrepancy results in a much lower signal-to-noise ratio (SNR) in high-frequency bands than in low-frequency ones. When the SNR in these bands is sufficiently low, the observed signal becomes dominated by noise, which helps maintain the integrity of the data manifold while attenuating the image signal in these bands.
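+
+ A rough numpy sketch of this spectral argument (our illustration, with a toy random-walk image standing in for natural-image statistics): estimate the radially averaged power spectra of an image and of white noise, then locate the first frequency band where the noise floor dominates.
+
+ ```python
+ import numpy as np
+
+ def radial_power_spectrum(img):
+     power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
+     h, w = img.shape
+     yy, xx = np.indices((h, w))
+     r = np.hypot(yy - h // 2, xx - w // 2).astype(int)  # radial frequency bins
+     return np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
+
+ rng = np.random.default_rng(0)
+ img = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # toy 1/f-like image
+ noise = 0.4 * rng.standard_normal((64, 64))
+
+ ps_img, ps_noise = radial_power_spectrum(img), radial_power_spectrum(noise)
+ crossover = np.argmax(ps_img < ps_noise)  # first noise-dominated band (0 if none)
+ print("noise dominates above radial frequency bin:", crossover)
+ ```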
143
+
144
+ As shown in Fig. 3(b), our empirical findings indicate that selecting $\mathrm{BNR} = 0.5$ causes the image signal to begin attenuating only once noise intensity exceeds the signal in these frequency bands, keeping the blurry-noisy signal comparable to the noisy signal in hot diffusion. Beyond this threshold, Fig. 3(c) illustrates that, for a higher BNR value, the observed signal diverges from that of hot diffusion, as the image signal attenuates significantly before noise dominates those frequency bands.
145
+
146
+ # 5 EXPERIMENTS
147
+
148
+ # 5.1 IMAGE GENERATION
149
+
150
+ Datasets. We validate our proposed diffusion process on three widely used benchmarks: CIFAR-10 (Krizhevsky, 2009) $32 \times 32$ , FFHQ (Karras et al., 2019) $64 \times 64$ , and LSUN-church (Yu et al., 2016) $128 \times 128$ . These datasets were chosen to demonstrate the effectiveness of our method across various scenarios. The CIFAR-10 dataset contains $32 \times 32$ color images across 10 classes, allowing us to
151
+
152
+ Table 1: Quantitative results and comparison for $32 \times 32$ and $64 \times 64$ image generation tasks on CIFAR-10 (Krizhevsky, 2009) and FFHQ (Karras et al., 2019) datasets correspondingly. Lower FID and higher IS scores indicate better sample quality. NFE denotes the "Number of Function Evaluations". The best results are highlighted in bold; the second-best results are underlined.
153
+
154
+ <table><tr><td>Methods</td><td>NFE ↓</td><td>FID ↓</td><td>IS ↑</td></tr><tr><td colspan="4">Unconditional CIFAR-10</td></tr><tr><td>Cold Diffusion (Blur) (Bansal et al., 2022)</td><td>50</td><td>80.08</td><td>-</td></tr><tr><td>IHDM (Rissanen et al., 2023)</td><td>200</td><td>18.96</td><td>-</td></tr><tr><td>Blurring Diffusion (Hoogeboom &amp; Salimans, 2024)</td><td>1000</td><td>3.17</td><td>9.51</td></tr><tr><td>EDM (Karras et al., 2022)</td><td>35</td><td>1.97</td><td>9.78</td></tr><tr><td>EDM-ES (Ning et al., 2024)</td><td>35</td><td>1.95</td><td>-</td></tr><tr><td>STF (Xu et al., 2023b)</td><td>35</td><td>1.92</td><td>9.79</td></tr><tr><td>PFGM++ (Xu et al., 2023a)</td><td>35</td><td>1.91</td><td>-</td></tr><tr><td>Ours</td><td>35</td><td>1.85</td><td>10.02</td></tr><tr><td colspan="4">Class-conditional CIFAR-10</td></tr><tr><td>EDM (Karras et al., 2022)</td><td>35</td><td>1.79</td><td>-</td></tr><tr><td>EDM-ES (Ning et al., 2024)</td><td>35</td><td>1.80</td><td>-</td></tr><tr><td>PFGM++ (Xu et al., 2023a)</td><td>35</td><td>1.74</td><td>-</td></tr><tr><td>Ours</td><td>35</td><td>1.68</td><td>10.19</td></tr><tr><td colspan="4">FFHQ 64 × 64</td></tr><tr><td>EDM (Karras et al., 2022)</td><td>79</td><td>2.53</td><td>-</td></tr><tr><td>PFGM++ (Xu et al., 2023a)</td><td>79</td><td>2.43</td><td>-</td></tr><tr><td>Ours</td><td>79</td><td>2.29</td><td>3.41</td></tr></table>
155
+
156
+ Table 2: Quantitative results and comparisons for $128 \times 128$ image generation tasks on the unconditional LSUN-church (Yu et al., 2016) dataset. For a fair comparison, we evaluate sample quality using the same number of samples as in previous studies.
157
+
158
+ <table><tr><td>Methods</td><td>NFE ↓</td><td>FID ↓</td></tr><tr><td colspan="3">Number of samples = 10k</td></tr><tr><td>Denoising Diffusion (Hoogeboom &amp; Salimans, 2024)</td><td>1000</td><td>4.68</td></tr><tr><td>Blurring Diffusion (Hoogeboom &amp; Salimans, 2024)</td><td>1000</td><td>3.88</td></tr><tr><td>Ours</td><td>511</td><td>3.47</td></tr><tr><td colspan="3">Number of samples = 50k</td></tr><tr><td>IHDM (Rissanen et al., 2023)</td><td>400</td><td>45.06</td></tr><tr><td>Ours</td><td>511</td><td>2.56</td></tr></table>
159
+
160
+ Table 3: Ablation study on the impact of different BNR values for CIFAR-10, with a fixed number of sampling steps (NFE=35).
161
+
162
+ <table><tr><td>BNR</td><td>FID ↓</td><td>IS ↑</td></tr><tr><td>0 (EDM (Karras et al., 2022))</td><td>1.97</td><td>9.78</td></tr><tr><td>0.1</td><td>1.97</td><td>9.96</td></tr><tr><td>0.3</td><td>1.90</td><td>10.02</td></tr><tr><td>0.5</td><td>1.85</td><td>10.02</td></tr><tr><td>0.65</td><td>1.91</td><td>10.00</td></tr><tr><td>1</td><td>2.01</td><td>9.96</td></tr><tr><td>2</td><td>2.57</td><td>9.89</td></tr><tr><td>10</td><td>11.97</td><td>8.51</td></tr></table>
163
+
164
+ test both unconditional and class-conditional image generation. For the FFHQ and LSUN-church datasets, we evaluate the model in unconditional settings. The FFHQ $64 \times 64$ dataset comprises 70,000 images of human faces, which have a higher degree of shared structure compared to general scene datasets. The LSUN-church $128 \times 128$ dataset features images of church scenes, enabling us to validate our method in higher-resolution scenarios.
165
+
166
+ Implementation Details. For training, we adopt the improved DDPM++/NCSN++ (Song et al., 2021) network architectures, training strategies, and hyperparameters from the state-of-the-art diffusion model, EDM (Karras et al., 2022). Modifications are made to enable the network to accept two conditioning signals (the blur and noise levels), and we double the output channels to produce predictors for deblurring and denoising, respectively. In our current design, the two networks share most components, so altering the BNR keeps the model capacity nearly constant, ensuring a fair comparison. For sampling, we adapt Heun's $2^{nd}$-order solver, following EDM (Karras et al., 2022). More details are available in Appendix B.
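+
+ A minimal PyTorch sketch of the two modifications described above: fusing a blur-level embedding with the noise-level embedding, and doubling the output channels so that a single forward pass yields both predictors. The `backbone` stands in for the DDPM++/NCSN++ U-Net, and the embedding layout is our assumption, not the authors' exact implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class BlurNoiseWrapper(nn.Module):
+     def __init__(self, backbone, emb_dim=256, img_channels=3):
+         super().__init__()
+         self.backbone = backbone          # U-Net emitting 2 * img_channels maps
+         self.noise_emb = nn.Sequential(nn.Linear(1, emb_dim), nn.SiLU(),
+                                        nn.Linear(emb_dim, emb_dim))
+         self.blur_emb = nn.Sequential(nn.Linear(1, emb_dim), nn.SiLU(),
+                                       nn.Linear(emb_dim, emb_dim))
+         self.img_channels = img_channels
+
+     def forward(self, x, alpha, beta):
+         # Fuse the two conditioning signals into one embedding vector.
+         emb = self.noise_emb(beta[:, None]) + self.blur_emb(alpha[:, None])
+         out = self.backbone(x, emb)       # (B, 2 * C, H, W)
+         D, R = out.split(self.img_channels, dim=1)
+         return D, R                       # denoised blurry image, HF residual
+ ```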
167
+
168
+ Performance Comparison. To evaluate image generation quality, we use two commonly adopted metrics: Fréchet Inception Distance (FID) (Heusel et al., 2018) and Inception Score (IS) (Salimans et al., 2016). The FID score measures the distance between the generated and reference datasets; a lower FID score indicates greater similarity between the two, reflecting better recovery of the data distribution by the generative model. The Inception Score is computed by passing generated images through a pre-trained classifier; it is maximized when each generated image receives a confident, low-entropy class prediction (sharpness) while the predictions across images are evenly spread over classes (diversity).
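+
+ For reference, the IS computation from classifier probabilities takes the following standard form, $IS = \exp(\mathbb{E}_x KL(p(y|x) \,\|\, p(y)))$ (a generic sketch, not this paper's evaluation code):
+
+ ```python
+ import numpy as np
+
+ def inception_score(probs, eps=1e-12):
+     """probs: (N, num_classes) softmax outputs of a pre-trained classifier."""
+     probs = np.clip(probs, eps, 1.0)
+     p_y = probs.mean(axis=0, keepdims=True)   # marginal label distribution
+     kl = (probs * (np.log(probs) - np.log(p_y))).sum(axis=1)
+     return float(np.exp(kl.mean()))
+
+ rng = np.random.default_rng(0)
+ logits = rng.standard_normal((1000, 10))
+ probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
+ print(inception_score(probs))                 # near 1 for unconfident predictions
+ ```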
169
+
170
+ Following established procedures, we sample 50,000 images over three rounds and report the minimum scores to mitigate random variation effects. As shown in Tab. 1, we assess sample quality using FID and IS alongside the number of function evaluations (NFE) during sampling—a metric closely related to the sampling speed of diffusion-based methods. Our approach significantly enhances the performance of the baseline model, EDM, across both CIFAR-10 and FFHQ datasets, regardless of their differing characteristics. This improvement is evident in both conditional and unconditional settings on CIFAR-10, outperforming recent methods designed to enhance EDM.
171
+
172
+ ![](images/f8c99b5532bdd5360be0a4f6028c89a784241c19ce70035fe3ad3b243ffaf1f9.jpg)
173
+ Figure 5: Illustration of sample quality corresponding to different BNR and NFE. Each curve represents a specific BNR value. As shown in the chart, higher BNR values require more sampling steps to achieve better sample quality.
174
+
175
+ ![](images/820807c89560a388251539be4e680ec0701ce0ce070e8da0e06973687c116bf5.jpg)
176
+ Figure 6: Comparison of blur and noise schedules with previous methods. Most related studies, except hot diffusion, selected higher BNR values in their schedules, leading to out-of-manifold issues. Details in Appendix A.2.
177
+
178
+ Moreover, in addition to comparing our approach with techniques aimed at improving hot diffusion, we evaluate it against Cold Diffusion and intermediate methods such as IHDM (Rissanen et al., 2023) and Blurring Diffusion (Hoogeboom & Salimans, 2024). Our method not only achieves superior sample quality but also requires significantly fewer sampling steps for image generation. In Tab. 2, we also show that our method outperforms previous methods on a more complex and higher-resolution dataset, LSUN-church $128 \times 128$. For a fair comparison, we use the same number of samples for the FID evaluation, since the FID score is sensitive to the number of samples.
179
+
180
+ # 5.2 ANALYSIS OF DIFFERENT BNRs AND SCHEDULING
181
+
182
+ Relation between BNRs and Data Manifolds. We first validate our assumptions by testing various BNR values ranging from 0 to 10. As shown in Tab. 3, using the same number of sampling steps as the baseline method, EDM (Karras et al., 2022), the sample quality improves as the BNR value increases up to 0.5. Beyond this threshold, performance rapidly declines, eventually falling below the baseline. These results support our hypothesis in Sec. 4.4 that a BNR value of 0.5, guided by spectral analysis, can effectively balance the trade-off between mitigating negative impacts on data manifold shifts and enhancing model learning. As discussed in Sec. 4.4, when the BNR value exceeds 0.5, the data manifold tends to shift significantly. Consequently, following the same sampling steps, the samples are more likely to fall outside the manifold, resulting in lower generation quality. This suggests that higher BNR values may require additional sampling steps to mitigate out-of-manifold issues and enhance sample quality. In Fig. 5, we further assess the impact of different BNR values and sampling steps on sample quality. The results confirm that higher BNR values indeed demand more sampling steps to reach better sample quality, supporting our analysis.
183
+
184
+ Revisiting the BNR Scheduling in Prior Studies. We conduct experiments to compare the BNR schedules from prior studies with our proposed method. Specifically, we re-implement the BNR schedule from Blurring Diffusion (Hoogeboom & Salimans, 2024) under the same experimental conditions to eliminate the effects of differing parameterization techniques. The results, presented in Tab. 4, indicate that Blurring Diffusion's BNR schedule results in poor generation quality when using fewer sampling steps, although quality improves significantly with more steps. This highlights a limitation in Blurring Diffusion's BNR scheduling, as depicted in Fig. 6, where higher BNR values necessitate additional sampling steps to prevent the reverse process from deviating from the data manifold. Consequently, fewer steps result in poorer sample quality. These findings help explain why previous approaches with colder diffusion schedules have struggled to generate high-quality samples, particularly under limited sampling budgets.
185
+
186
+ Table 4: We re-implement Blurring Diffusion using our parameterization and training scheme on CIFAR-10. Results marked with * are those reported by Hoogeboom & Salimans (2024).
187
+
188
+ <table><tr><td>BNR Schedule</td><td>NFE ↓</td><td>FID ↓</td><td>IS ↑</td></tr><tr><td rowspan="7">Blurring Diffusion (Hoogeboom &amp; Salimans, 2024)</td><td>35</td><td>12.97</td><td>8.57</td></tr><tr><td>79</td><td>4.13</td><td>9.27</td></tr><tr><td>159</td><td>2.91</td><td>9.46</td></tr><tr><td>239</td><td>2.77</td><td>9.52</td></tr><tr><td>319</td><td>2.73</td><td>9.54</td></tr><tr><td>399</td><td>2.71</td><td>9.54</td></tr><tr><td>999</td><td>2.68</td><td>9.56</td></tr><tr><td>Blurring Diffusion* (Hoogeboom &amp; Salimans, 2024)</td><td>1000</td><td>3.17</td><td>9.51</td></tr></table>
189
+
190
+ Table 5: An ablation study on different parameterization of $\mathbf{x}_0$ and $\mathbf{V}M_{\alpha_t}\mathbf{V}^T\mathbf{x}_0$ in Eq. (6). We fix a constant number of sampling steps (NFE=35) to investigate various parameterizations for the generation task using CIFAR-10.
191
+
192
+ <table><tr><td rowspan="2"></td><td colspan="2">Training Objectives</td><td colspan="2">Parameterization of</td><td rowspan="2">FID ↓</td></tr><tr><td>Rθ</td><td>Dθ</td><td>x0</td><td>VMαtVTx0</td></tr><tr><td>(a)</td><td>x0</td><td>-</td><td>Rθ</td><td>VMαtVTRθ</td><td>13.19</td></tr><tr><td>(b)</td><td>-</td><td>VMαtVTx0</td><td>V(Mαt)-1VTDθ</td><td>Dθ</td><td>9.09</td></tr><tr><td>(c)</td><td>x0</td><td>VMαtVTx0</td><td>Rθ</td><td>Dθ</td><td>1.98</td></tr><tr><td>(d)</td><td>xrest</td><td>VMαtVTx0</td><td>V(I-Mαt)-1VTRθ</td><td>Dθ</td><td>1.85</td></tr></table>
193
+
194
+ # 5.3 ABLATION STUDIES ON TRAINING OBJECTIVES AND VARIATIONS OF PARAMETERIZATIONS
195
+
196
+ In Sec. 4.2, we reformulate the reverse function, Eq. (6), to simplify the neural network's training objectives through a divide-and-conquer approach. This method separates the task of predicting the clean signal $x_0$ into two sub-tasks: denoising to a blurry signal and predicting the residual signal for deblurring. In this subsection, we explore various parameterization strategies derived from Eq. (6) and evaluate their performances, demonstrating the advantages of our divide-and-conquer strategy for model learning.
197
+
198
+ As shown in Tab. 5(a), using a single branch to directly predict the entire clean signal results in significantly poorer generation quality, as this task proves too challenging. In Tab. 5(b), shifting the target to learn a blurry signal yields a slight improvement in sample quality, since this target is easier to model; however, it still suffers from inaccuracies in high-frequency components, which may be exacerbated during sampling, occasionally resulting in noisy patterns in the generated samples. In Tab. 5(c), employing two branches to predict both clean and blurry signals effectively addresses these challenges, leading to substantially better results, though they remain comparable to those of hot diffusion models such as EDM (Karras et al., 2022). Finally, in Tab. 5(d), the proposed divide-and-conquer strategy further improves performance, benefiting especially from the BNR schedule and the parameterization. More visual examples are provided in Fig. 11.
199
+
200
+ # 6 CONCLUSION
201
+
202
+ In this paper, we introduce a unified Warm Diffusion framework that effectively bridges the gap between hot and cold diffusion models while addressing their inherent limitations. Our analysis reveals that hot diffusion models underutilize the spectral dependency of images, whereas cold diffusion models risk reverse sampling steps that deviate from the data manifolds. By examining the Blur-to-Noise Ratio (BNR), we uncover its significant influence on model behavior and data manifolds. This insight enables us to propose a strategy for balancing the trade-off between hot and cold diffusion, ultimately enhancing diffusion models for image generation. Experimental results across various benchmarks validate the effectiveness of our approach, demonstrating improvements in sample quality over state-of-the-art diffusion models.
203
+
204
+ # ACKNOWLEDGMENTS
205
+
206
+ This work was financially supported in part (project number: 112UA10019) by the Co-creation Platform of the Industry Academia Innovation School, NYCU, under the framework of the National Key Fields Industry-University Cooperation and Skilled Personnel Training Act, from the Ministry of Education (MOE) and industry partners in Taiwan. It was also supported in part by the National Science and Technology Council, Taiwan, under Grant NSTC-112-2221-E-A49-089-MY3, Grant NSTC-110-2221-E-A49-066-MY3, Grant NSTC-111-2634-F-A49-010, Grant NSTC-112-2425-H-A49-001, and in part by the Higher Education Sprout Project of the National Yang Ming Chiao Tung University and the Ministry of Education (MOE), Taiwan. We would also like to express our gratitude for the support from MediaTek Inc., Hon Hai Research Institute (HHRI), E.SUN Financial Holding Co., Ltd., Advantech Co., Ltd., and the Industrial Technology Research Institute (ITRI).
207
+
208
+ # REFERENCES
209
+
210
+ Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Cold diffusion: Inverting arbitrary image transforms without noise, 2022. URL https://arxiv.org/abs/2208.09392.
211
+ Christopher M Bishop and Nasser M Nasrabadi. Pattern recognition and machine learning, volume 4. Springer, 2006.
212
+ Giannis Daras, Maurizio Delbracio, Hossein Talebi, Alexandros G. Dimakis, and Peyman Milanfar. Soft diffusion: Score matching for general corruptions, 2022. URL https://arxiv.org/abs/2209.05442.
213
+ Maurizio Delbracio and Peyman Milanfar. Inversion by direct iteration: An alternative to denoising diffusion for image restoration, 2024. URL https://arxiv.org/abs/2303.11435.
214
+ Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018. URL https://arxiv.org/abs/1706.08500.
215
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models, 2020. URL https://arxiv.org/abs/2006.11239.
216
+ Emiel Hoogeboom and Tim Salimans. Blurring diffusion models, 2024. URL https://arxiv.org/abs/2209.05557.
217
+ Aapo Hyvärinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005.
218
+ Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4396-4405, 2019. doi: 10.1109/CVPR.2019.00453.
219
+ Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models, 2022. URL https://arxiv.org/abs/2206.00364.
220
+ Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. URL https://api.semanticscholar.org/CorpusID:18268744.
221
+ Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A. Theodorou, Weili Nie, and Anima Anandkumar. I²SB: Image-to-image Schrödinger bridge, 2023. URL https://arxiv.org/abs/2302.05872.
222
+ Jiawei Liu, Qiang Wang, Huijie Fan, Yinong Wang, Yandong Tang, and Liangqiong Qu. Residual denoising diffusion models, 2024. URL https://arxiv.org/abs/2308.13712.
223
+ Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjolund, and Thomas B. Schön. Image restoration with mean-reverting stochastic differential equations, 2023. URL https://arxiv.org/abs/2301.11699.
224
+ Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models, 2021. URL https://arxiv.org/abs/2102.09672.
225
+ Mang Ning, Mingxiao Li, Jianlin Su, Albert Ali Salah, and Itir Onal Ertugrul. Elucidating the exposure bias in diffusion models, 2024. URL https://arxiv.org/abs/2308.15321.
226
+ Severi Rissanen, Markus Heinonen, and Arno Solin. Generative modelling with inverse heat dissipation, 2023. URL https://arxiv.org/abs/2206.13397.
227
+
228
+ Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans, 2016. URL https://arxiv.org/abs/1606.03498.
229
+ Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics, 2015. URL https://arxiv.org/abs/1503.03585.
230
+ Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models, 2022. URL https://arxiv.org/abs/2010.02502.
231
+ Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/3001ef257407d5a371a96dcd947c7d93-Paper.pdf.
232
+ Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations, 2021. URL https://arxiv.org/abs/2011.13456.
233
+ A. van der Schaaf and J.H. van Hateren. Modelling the power spectra of natural images: Statistics and information. Vision Research, 36(17):2759-2770, 1996. ISSN 0042-6989. doi: https://doi.org/10.1016/0042-6989(96)00002-8. URL https://www.sciencedirect.com/science/article/pii/0042698996000028.
234
+ Yilun Xu, Ziming Liu, Yonglong Tian, Shangyuan Tong, Max Tegmark, and Tommi Jaakkola. Pfgm++: Unlocking the potential of physics-inspired generative models, 2023a. URL https://arxiv.org/abs/2302.04265.
235
+ Yilun Xu, Shangyuan Tong, and Tommi Jaakkola. Stable target field for reduced variance score estimation in diffusion models, 2023b. URL https://arxiv.org/abs/2302.00670.
236
+ Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop, 2016. URL https://arxiv.org/abs/1506.03365.
237
+ Zongsheng Yue, Jianyi Wang, and Chen Change Loy. ResShift: Efficient diffusion model for image super-resolution by residual shifting, 2023. URL https://arxiv.org/abs/2307.12348.
238
+
239
+ # A DERIVATION
240
+
241
+ # A.1 PROOF
242
+
243
+ In this section, we prove by induction that the inference transition distribution defined in Eq. (5) matches the marginal distribution defined in Eq. (4).
244
+
245
+ # Base Case:
246
+
247
+ For $t = T$ , it is given that
248
+
249
+ $$
250
+ q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {T}, \beta_ {T}} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {V} \boldsymbol {M} _ {\alpha_ {T}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0}, \beta_ {T} ^ {2} \boldsymbol {I}\right), \tag {11}
251
+ $$
252
+
253
+ so the base case holds.
254
+
255
+ # Induction Hypothesis:
256
+
257
+ Assume that for $t$ , the following holds
258
+
259
+ $$
260
+ q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {V} \boldsymbol {M} _ {\alpha_ {t}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0}, \beta_ {t} ^ {2} \boldsymbol {I}\right). \tag {12}
261
+ $$
262
+
263
+ We now aim to show that it holds for $t - 1$ .
264
+
265
+ # Inductive Step:
266
+
267
+ We have
268
+
269
+ $$
270
+ q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {t - 1}, \beta_ {t - 1}} \mid \boldsymbol {x} _ {0}\right) = \int_ {\boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}}} q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}} \mid \boldsymbol {x} _ {0}\right) q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {t - 1}, \beta_ {t - 1}} \mid \boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}}, \boldsymbol {x} _ {0}\right) d \boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}} \tag {13}
271
+ $$
272
+
273
+ and also the inference transition distribution
274
+
275
+ $$
276
+ q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {t - 1}, \beta_ {t - 1}} \mid \boldsymbol {x} _ {\alpha_ {t}, \beta_ {t}}, \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {V} \boldsymbol {M} _ {\alpha_ {t - 1}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0} + \sqrt {\beta_ {t - 1} ^ {2} - \sigma_ {t} ^ {2}} \left(\frac {\boldsymbol {x} _ {\alpha_ {t} , \beta_ {t}} - \boldsymbol {V} \boldsymbol {M} _ {\alpha_ {t}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0}}{\beta_ {t}}\right), \sigma_ {t} ^ {2} \boldsymbol {I}\right). \tag {14}
277
+ $$
278
+
279
+ Following Eq. (2.115) of Bishop & Nasrabadi (2006), we have that $q_{\sigma}(\boldsymbol{x}_{\alpha_{t-1}, \beta_{t-1}} | \boldsymbol{x}_0)$ is Gaussian, denoted as $\mathcal{N}(\boldsymbol{\mu}_{t-1}, \boldsymbol{\Sigma}_{t-1})$, where
280
+
281
+ $$
282
+ \begin{aligned} \boldsymbol{\mu}_{t-1} &= \boldsymbol{V} \boldsymbol{M}_{\alpha_{t-1}} \boldsymbol{V}^{T} \boldsymbol{x}_{0} + \sqrt{\beta_{t-1}^{2} - \sigma_{t}^{2}} \left(\frac{\boldsymbol{V} \boldsymbol{M}_{\alpha_{t}} \boldsymbol{V}^{T} \boldsymbol{x}_{0} - \boldsymbol{V} \boldsymbol{M}_{\alpha_{t}} \boldsymbol{V}^{T} \boldsymbol{x}_{0}}{\beta_{t}}\right) \\ &= \boldsymbol{V} \boldsymbol{M}_{\alpha_{t-1}} \boldsymbol{V}^{T} \boldsymbol{x}_{0} \end{aligned} \tag{15}
283
+ $$
284
+
285
+ and
286
+
287
+ $$
288
+ \boldsymbol {\Sigma} _ {t - 1} = \sigma_ {t} ^ {2} \boldsymbol {I} + \left(\frac {\beta_ {t - 1} ^ {2} - \sigma_ {t} ^ {2}}{\beta_ {t} ^ {2}}\right) \beta_ {t} ^ {2} \boldsymbol {I} = \beta_ {t - 1} ^ {2} \boldsymbol {I}. \tag {16}
289
+ $$
290
+
291
+ Therefore we have
292
+
293
+ $$
294
+ q _ {\sigma} \left(\boldsymbol {x} _ {\alpha_ {t - 1}, \beta_ {t - 1}} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {V} \boldsymbol {M} _ {\alpha_ {t - 1}} \boldsymbol {V} ^ {T} \boldsymbol {x} _ {0}, \beta_ {t - 1} ^ {2} \boldsymbol {I}\right). \tag {17}
295
+ $$
296
+
297
+ # Conclusion:
298
+
299
+ The marginal distribution $q_{\sigma}(\pmb{x}_{\alpha_t,\beta_t}|\pmb{x}_0) = \mathcal{N}(\pmb{V}\pmb{M}_{\alpha_t}\pmb{V}^T\pmb{x}_0,\beta_t^2\pmb{I})$ holds for all $t$ according to the derivation above.
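+
+ The inductive step can also be checked symbolically. The sympy sketch below (our addition) composes the marginal at $t$ with the transition of Eq. (14) for a single DCT frequency band, where $\boldsymbol{V}\boldsymbol{M}\boldsymbol{V}^T$ reduces to multiplication by a scalar mask entry, and recovers Eqs. (15) and (16).
+
+ ```python
+ import sympy as sp
+
+ # m_t, m_p: diagonal entries of M_{alpha_t}, M_{alpha_{t-1}} for one band.
+ m_t, m_p, b_t, b_p, s_t, x0 = sp.symbols("m_t m_p b_t b_p s_t x0", positive=True)
+
+ # Eq. (14) is linear-Gaussian in x_t: x_{t-1} = A x_t + c + s_t * eps.
+ A = sp.sqrt(b_p**2 - s_t**2) / b_t
+ c = m_p * x0 - A * m_t * x0
+
+ # Composition with x_t ~ N(m_t x0, b_t^2) (Bishop & Nasrabadi, Eq. 2.115).
+ mean = sp.simplify(A * (m_t * x0) + c)     # -> m_p * x0, matching Eq. (15)
+ var = sp.simplify(A**2 * b_t**2 + s_t**2)  # -> b_p**2, matching Eq. (16)
+ print(mean, var)
+ ```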
300
+
301
+ # A.2 BNR SCHEDULE OF PRIOR STUDIES
302
+
303
+ Here, we further analyze the BNR schedules of prior studies as depicted in Fig. 6. For Blurring Diffusion (Hoogeboom & Salimans, 2024), we begin by examining its blurring and noising schedules indexed by $t \in \{1, 2, \dots, T\}$. As defined in (Hoogeboom & Salimans, 2024), the blurring schedule, $\alpha_{t}$, is parameterized as:
304
+
305
+ $$
306
+ \alpha_{t} = 20 \sin^{2} \left(\frac{\pi t}{2T}\right). \tag{18}
307
+ $$
308
+
309
+ Given that a cosine noise schedule is applied—scaling the signal throughout the diffusion process—the noising schedule, $\beta_{t}$ , can be parameterized from the perspective of the signal-to-noise ratio (SNR) as:
310
+
311
+ $$
312
+ \beta_ {t} \approx \frac {\sin \left(\frac {\pi t}{2 T}\right)}{\cos \left(\frac {\pi t}{2 T}\right)} = \tan \left(\frac {\pi t}{2 T}\right). \tag {19}
313
+ $$
314
+
315
+ Thus the BNR schedule can be written as:
316
+
317
+ $$
318
+ \begin{array}{l} B N R _ {t} = \frac {\alpha_ {t}}{\beta_ {t}} = \frac {2 0 \sin^ {2} (\frac {\pi t}{2 T})}{\tan (\frac {\pi t}{2 T})} \\ = 2 0 \sin \left(\frac {\pi t}{2 T}\right) \cos \left(\frac {\pi t}{2 T}\right) \tag {20} \\ = 1 0 \sin (\frac {\pi t}{T}). \\ \end{array}
319
+ $$
320
+
321
+ Regarding other related work, IHDM (Rissanen et al., 2023) introduces a diffusion process by applying a small, constant amount of noise while progressively increasing blur levels. As a result, the BNR schedule for IHDM is represented as a vertical line in Fig. 6. In contrast, Bansal et al. (2022) develop Cold Diffusion by constructing the diffusion process entirely without noise, leading to a BNR value of $\infty$ .
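+
+ A tiny numpy sketch (ours) makes the contrast explicit: Blurring Diffusion's BNR follows the time-varying curve of Eq. (20) and peaks at 10 mid-trajectory, far above the constant value of 0.5 used in this paper.
+
+ ```python
+ import numpy as np
+
+ T = 1000
+ t = np.arange(1, T + 1)
+ bnr_blurring = 10 * np.sin(np.pi * t / T)     # Eq. (20), peaks at t = T/2
+ bnr_ours = np.full(T, 0.5)                    # constant schedule (this paper)
+ print(bnr_blurring.max())                     # 10.0
+ ```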
322
+
323
+ # B ADDITIONAL DETAILS OF OUR IMPLEMENTATION
324
+
325
+ We adopt the network architectures, training techniques, and hyperparameters from the state-of-the-art diffusion model, EDM (Karras et al., 2022), making only minor modifications to preserve constant model capacity. In this section, we first provide detailed information on the architecture and training settings, demonstrating that the observed improvement in generation quality is not due to a larger or more complex network design nor to hyperparameter fine-tuning. Subsequently, we present the algorithms for both training and sampling within our proposed diffusion process to facilitate a clearer understanding of the methodology.
326
+
327
+ # B.1 NETWORK ARCHITECTURES AND HYPERPARAMETERS
328
+
329
+ In Tab. 6, we list the hyperparameters used in our experiments. For CIFAR-10 and FFHQ, we adopt the same settings as EDM (Karras et al., 2022), without tuning for optimal hyperparameters. For LSUN-church, we follow the network architecture and settings from Blurring Diffusion to ensure a fair comparison. Across all datasets, we apply slight modifications to the network architecture, as depicted in Fig. 7. These modifications include: (1) incorporating an additional conditioning signal for the blurring level, using an embedding branch fused with the noise-level embedding, and (2) doubling the output channels of the neural network and splitting them to compute separate losses, allowing the model to predict both the residual high-frequency detail and the underlying blurry signal. These two changes result in less than a $0.15\%$ increase in model size, as shown in Tab. 6, indicating that the observed improvement in sample quality stems from the proposed method rather than the architecture.
330
+
331
+ Table 6: Hyperparameters and model sizes used in the experiments.
332
+
333
+ <table><tr><td rowspan="2">Hyperparameter</td><td colspan="2">CIFAR-10</td><td colspan="2">FFHQ 64 × 64</td><td colspan="2">LSUN-church 128 × 128</td></tr><tr><td>Baseline</td><td>Ours</td><td>Baseline</td><td>Ours</td><td>Baseline</td><td>Ours</td></tr><tr><td>Number of GPUs</td><td>8</td><td>8</td><td>8</td><td>8</td><td>8</td><td>8</td></tr><tr><td>Duration (Mimg)</td><td>200</td><td>200</td><td>200</td><td>200</td><td>512</td><td>200</td></tr><tr><td>Minibatch size</td><td>512</td><td>512</td><td>256</td><td>256</td><td>256</td><td>256</td></tr><tr><td>Learning rate ×10-4</td><td>10</td><td>10</td><td>2</td><td>2</td><td>1</td><td>1</td></tr><tr><td>LR ramp-up (Mimg)</td><td>10</td><td>10</td><td>10</td><td>10</td><td>10</td><td>10</td></tr><tr><td>EMA half-life (Mimg)</td><td>0.5</td><td>0.5</td><td>0.5</td><td>0.5</td><td>0.89</td><td>0.5</td></tr><tr><td>Dropout probability</td><td>0.13</td><td>0.13</td><td>0.05</td><td>0.05</td><td>0.1</td><td>0.1</td></tr><tr><td>Channel multiplier</td><td>128</td><td>128</td><td>128</td><td>128</td><td>64</td><td>64</td></tr><tr><td>Channels per resolution</td><td>2-2-2</td><td>2-2-2</td><td>1-2-2-2</td><td>1-2-2-2</td><td>1-2-4-6-8</td><td>1-2-4-6-8</td></tr><tr><td>Residual blocks per resolution</td><td>4</td><td>4</td><td>4</td><td>4</td><td>3</td><td>3</td></tr><tr><td>Attention resolutions</td><td>{16}</td><td>{16}</td><td>{16}</td><td>{16}</td><td>{8, 16, 32}</td><td>{8, 16, 32}</td></tr><tr><td>Attention heads</td><td>1</td><td>1</td><td>1</td><td>1</td><td>4-6-8</td><td>4-6-8</td></tr><tr><td>Model size</td><td>55.734 M</td><td>55.741 M</td><td>62.761 M</td><td>62.765 M</td><td>117.955 M</td><td>117.957 M</td></tr></table>
334
+
335
+ ![](images/db28478f7944b629c22cac527c1a40a849a3ccc157503476169e1105e3908cee.jpg)
336
+ Figure 7: Our model architecture includes an additional embedding branch to incorporate a conditioning signal for the blurring and noising levels. Furthermore, to enable the network to simultaneously perform both deblurring and denoising, the output channels are doubled and split to handle each task separately.
337
+
338
+ # B.2 PROCEDURE OF TRAINING AND SAMPLING
339
+
340
+ We present the training procedure in Algorithm 1, which follows the training scheme of the state-of-the-art diffusion model EDM (Karras et al., 2022), with slight modifications. To prepare the training data pairs, after sampling a noise level $\beta$, we further determine a corresponding blur level $\alpha$ based on the predefined BNR value. The image signals are then transformed into blurry and noisy signals according to the sampled blur and noise levels. The neural network is trained to simultaneously denoise and deblur these signals, conditioned on the blur and noise levels. Unlike methods that require two separate function evaluations for denoising and deblurring, we utilize a single forward pass to predict both signals at once and split the output into two branches, each handling one task.
341
+
342
+ In Algorithm 2, we outline the sampling procedure of our proposed diffusion process, integrated with Heun's $2^{nd}$ order sampling method, similar to EDM (Karras et al., 2022). Following a predefined sampling schedule, for each reverse step, we first use the denoiser's prediction to guide the sample toward a blurry prediction, then apply the deblurring prediction to move the sample toward a sharper state, as outlined in Eq. (6). To integrate Heun's $2^{nd}$ order method, we temporarily update the signal $\pmb{x}_i$ to obtain $\pmb{x}_{i-1}$ , then repeat the process to obtain a refined update direction. The two predictions from consecutive iterations are averaged to correct the update step, yielding the final sample $\pmb{x}_{i-1}$ . This process is applied throughout, except for the last step, ensuring more accurate and stable sampling.
343
+
344
+ Algorithm 1 Training Phase
345
+ 1: Require: Hyperparameters $\{P_{mean}, P_{std}, BNR\}$
346
+ 2: Initialize Neural network $F_{\theta}$
347
+ 3: repeat
348
+ 4: $\pmb{x}_0 \sim q(\pmb{x}_0)$ ▷ Sample from training dataset
349
+ 5: $\pmb{\epsilon} \sim \mathcal{N}(\pmb{0}, \pmb{I})$ ▷ Sample a Gaussian noise
350
+ 6: $\ln(\beta) \sim \mathcal{N}(P_{mean}, P_{std}^2)$ ▷ Sample a noise level
351
+ 7: $\alpha = BNR \times \beta$ ▷ Get corresponding blur level from given noise level
352
+ 8: $\pmb{x}_{\alpha,0} = \pmb{V} \pmb{M}_{\alpha} \pmb{V}^T \pmb{x}_0$ ▷ Apply blurring operation
353
+ 9: $\pmb{x}_{\alpha,\beta} = \pmb{x}_{\alpha,0} + \beta \pmb{\epsilon}$ ▷ Add noise
354
+ 10: $\pmb{x}_{res} = \pmb{x}_0 - \pmb{x}_{\alpha,0}$ ▷ Compute the residual signal
355
+ 11: $\hat{D}_{\theta}, \hat{R}_{\theta} = F_{\theta}(\pmb{x}_{\alpha,\beta}; \alpha, \beta)$ ▷ Split the output of the neural network
356
+ 12: Take gradient step on
357
+ 13: $\nabla_{\theta} \lambda(\beta)(\|\hat{D}_{\theta} - \pmb{x}_{\alpha,0}\|^2 + \|\hat{R}_{\theta} - \pmb{x}_{res}\|^2)$ ▷ Jointly learn denoising and deblurring
358
+ 14: until converged
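+
+ A condensed PyTorch sketch of Algorithm 1 follows (our paraphrase). The orthonormal DCT matrix and the heat-kernel blur mask are assumed helper constructions, `model` is a two-headed network such as the wrapper sketched in Sec. 5.1, and the $\lambda(\beta)$ loss weighting of EDM is omitted for brevity.
+
+ ```python
+ import math
+ import torch
+
+ def dct_matrix(n):
+     """Orthonormal DCT-II matrix C with C @ C.T = I (rows = frequencies)."""
+     k = torch.arange(n).float().unsqueeze(1)
+     i = torch.arange(n).float().unsqueeze(0)
+     C = math.sqrt(2.0 / n) * torch.cos(math.pi * (i + 0.5) * k / n)
+     C[0] /= math.sqrt(2.0)
+     return C
+
+ def blur_mask(alpha, n):
+     """Assumed heat-kernel mask exp(-alpha^2 * lambda / 2), shape (B, n, n)."""
+     f = math.pi * torch.arange(n).float() / n
+     lam = f[:, None] ** 2 + f[None, :] ** 2
+     return torch.exp(-0.5 * alpha.view(-1, 1, 1) ** 2 * lam)
+
+ def training_step(model, x0, bnr=0.5, p_mean=-1.2, p_std=1.2):
+     B, _, n, _ = x0.shape
+     C = dct_matrix(n)
+     beta = torch.exp(p_mean + p_std * torch.randn(B))   # ln(beta) ~ N(P_mean, P_std^2)
+     alpha = bnr * beta                                  # blur level tied to noise by BNR
+     m = blur_mask(alpha, n).unsqueeze(1)                # (B, 1, n, n)
+     x_blur = C.t() @ (m * (C @ x0 @ C.t())) @ C         # V M_alpha V^T x0
+     x_in = x_blur + beta.view(-1, 1, 1, 1) * torch.randn_like(x0)
+     D, R = model(x_in, alpha, beta)
+     # Joint denoising / deblurring objective, Eqs. (8)-(9).
+     return ((D - x_blur) ** 2).mean() + ((R - (x0 - x_blur)) ** 2).mean()
+ ```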
359
+
360
+ Algorithm 2 Generation phase: Deterministic sampling with Heun's $2^{nd}$ order method
361
+ 1: Require: Neural network $F_{\theta}$ , Sampling schedule $\{(\alpha_0,\beta_0),(\alpha_1,\beta_1),\ldots ,(\alpha_N,\beta_N)\}$
362
+ 2: sample $\pmb{x}_N\sim \mathcal{N}(\pmb {0},\beta_N^2\pmb {I})$
363
+ 3: for $i\in \{N,N - 1,\dots ,1\}$ do
364
+ 4: $\hat{D}_{\theta},\hat{R}_{\theta} = F_{\theta}(\pmb{x}_{i};\alpha_{i},\beta_{i})$
365
+ 5: $\hat{\epsilon} = \frac{\pmb{x}_i - \hat{D}_{\theta}}{\beta_i}$
366
+ 6: $\hat{\pmb{x}}_0 = \pmb {V}(\pmb {I} - M_{\alpha_i})^{-1}\pmb {V}^T\hat{R}_{\theta}$
367
+ 7: $\pmb{x}_{i - 1} = \pmb {x}_i + V(M_{\alpha_{i - 1}} - M_{\alpha_i})\pmb {V}^T\hat{\pmb{x}}_0 + (\beta_{i - 1} - \beta_i)\hat{\epsilon}$
368
+ 8: if $i\neq 1$ then
369
+ 9: $\hat{D}_{\theta}',\hat{R}_{\theta}' = F_{\theta}(\pmb{x}_{i - 1};\alpha_{i - 1},\beta_{i - 1})$
370
+ 10: $\hat{\epsilon} ' = \frac{\pmb{x}_{i - 1} - \hat{D}_{\theta}'}{\beta_{i - 1}}$
371
+ 11: $\hat{\pmb{x}}_0' = V(\pmb {I} - M_{\alpha_{i - 1}})^{-1}\pmb {V}^T\hat{R}_{\theta}'$
372
+ 12: $\pmb{x}_{i - 1} = \pmb {x}_i + V(M_{\alpha_{i - 1}} - M_{\alpha_i})\pmb {V}^T(\frac{\hat{\pmb{x}}_0 + \hat{\pmb{x}}_0'}{2}) + (\beta_{i - 1} - \beta_i)(\frac{\hat{\epsilon} + \hat{\epsilon}'}{2})$
373
+ 13: end if
374
+ 14: end for
375
+ 15: return $\pmb{x}_0$
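+
+ A matching PyTorch sketch of Algorithm 2 (again our paraphrase, reusing the assumed `dct_matrix` and `blur_mask` helpers from the training sketch; `schedule` is a list of scalar tensor pairs $(\alpha_i, \beta_i)$ with $\beta_0 = 0$).
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def sample(model, schedule, shape):
+     B, _, n, _ = shape
+     C = dct_matrix(n)
+
+     def predict(x, a, b):
+         D, R = model(x, a.expand(B), b.expand(B))
+         eps = (x - D) / b                               # noise estimate (line 5)
+         m = blur_mask(a.expand(B), n).unsqueeze(1)
+         gain = torch.where(1 - m > 1e-8, 1 / (1 - m).clamp_min(1e-8),
+                            torch.zeros_like(m))         # drop the ill-defined DC gain
+         x0_hat = C.t() @ (gain * (C @ R @ C.t())) @ C   # V (I - M)^{-1} V^T R (line 6)
+         return x0_hat, eps, m
+
+     x = schedule[-1][1] * torch.randn(shape)            # x_N ~ N(0, beta_N^2 I)
+     for i in range(len(schedule) - 1, 0, -1):
+         (a_i, b_i), (a_p, b_p) = schedule[i], schedule[i - 1]
+         x0_hat, eps, m_i = predict(x, a_i, b_i)
+         dM = blur_mask(a_p.expand(B), n).unsqueeze(1) - m_i
+         x_new = x + C.t() @ (dM * (C @ x0_hat @ C.t())) @ C + (b_p - b_i) * eps
+         if i != 1:                                      # Heun's 2nd-order correction
+             x0_hat2, eps2, _ = predict(x_new, a_p, b_p)
+             x_new = (x + C.t() @ (dM * (C @ ((x0_hat + x0_hat2) / 2) @ C.t())) @ C
+                      + (b_p - b_i) * (eps + eps2) / 2)
+         x = x_new
+     return x
+ ```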
376
+
377
+ # C ADDITIONAL EXPERIMENTS
378
+
379
+ # C.1 SAMPLING SCHEDULES FOR IMAGE GENERATION
380
+
381
+ This section explains how the noise level schedule affects the reverse process in our diffusion model. Before starting the reverse process, a predefined sequence of blur and noise levels is needed for each reverse step. Since the blur level in our method is controlled by the BNR and linked to the noise level, we only need to focus on the schedule of the noise level.
382
+
383
+ Following the formula proposed by Karras et al. (2022), the sequence of noise levels can be formulated as:
384
+
385
+ $$
386
+ \beta_ {0 < i \leq N} = \left(\beta_ {\max } ^ {\frac {1}{\rho}} + \frac {N - i}{N - 1} \left(\beta_ {\min } ^ {\frac {1}{\rho}} - \beta_ {\max } ^ {\frac {1}{\rho}}\right)\right) ^ {\rho}, \beta_ {0} = 0, \tag {21}
387
+ $$
388
+
389
+ where $\rho$ controls the emphasis on different phases of noise levels. Setting $\rho = 1$ corresponds to uniform discretization, while a higher $\rho$ places more emphasis on lower noise levels, resulting in more sampling steps in this phase.
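+
+ A short numpy transcription of Eq. (21) (ours), showing how a larger $\rho$ packs more of the sampling budget into the low-noise end of the schedule.
+
+ ```python
+ import numpy as np
+
+ def noise_schedule(n_steps, beta_min=0.002, beta_max=80.0, rho=7.0):
+     """Eq. (21); beta_min / beta_max defaults follow EDM."""
+     i = np.arange(1, n_steps + 1)
+     betas = (beta_max ** (1 / rho)
+              + (n_steps - i) / (n_steps - 1)
+              * (beta_min ** (1 / rho) - beta_max ** (1 / rho))) ** rho
+     return np.concatenate([[0.0], betas])   # prepend beta_0 = 0
+
+ for rho in (1.0, 7.0, 20.0):
+     b = noise_schedule(18, rho=rho)
+     print(rho, np.round(b[1:5], 4))         # low-noise steps cluster as rho grows
+ ```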
390
+
391
+ Empirically, Karras et al. (2022) found that $\rho = 7$ worked well across various tasks, and adopted this value in all experiments. However, we discovered that our method benefits from using a higher $\rho$ value, as illustrated in Fig. 8. This improvement arises because our model learns a simpler target in the early stages of the reverse diffusion process, thereby reducing prediction error. By shifting the focus to lower noise levels, more sampling
392
+
393
+ ![](images/c325a5b46a9ff6d775f389dcc3e48cab27d3638220eced0e78301ac3664b3b99.jpg)
394
+ (a) CIFAR-10
395
+
396
+ ![](images/6341ee8f0c0984989fc23409d1326ba8707b4b8e9812058a78d969f6a6614089.jpg)
397
+ (b) FFHQ $64 \times 64$
398
+ Figure 8: Sampling schedule and sample quality. Compared to the baseline noise schedule proposed by EDM (i.e., $\rho = 7$), our method benefits from using a higher $\rho$ value. With a fixed number of sampling steps, this adjustment enables our sampling process to focus more on the later stages of the reverse process, where high-frequency details are generated. As a result, the sample quality is improved across both CIFAR-10 and FFHQ datasets.
399
+
400
+ Table 7: Quantitative comparisons of FID and IS scores on unconditional CIFAR-10 for different BNR values, evaluated using the DDPM (Ho et al., 2020) architecture.
401
+
402
+ <table><tr><td>Methods</td><td>BNR</td><td>FID ↓</td><td>IS ↑</td></tr><tr><td>DDPM (Ho et al., 2020)</td><td>0</td><td>3.17</td><td>9.46</td></tr><tr><td>Ours</td><td>0.3</td><td>3.11</td><td>9.48</td></tr><tr><td>Ours</td><td>0.5</td><td>3.03</td><td>9.51</td></tr></table>
403
+
404
+ steps are allocated to generate high-frequency detail (i.e., the later stages of the reverse process), ultimately enhancing the quality of the generated samples.
405
+
406
+ # C.2 EXPERIMENTAL RESULTS ON DDPM ARCHITECTURE
407
+
408
+ In the main manuscript, we demonstrate the effectiveness of our proposed approach using the stronger baseline, improved DDPM++/NCSN++, introduced by Karras et al. (2022). In this section, we extend our experiments to the network architecture proposed in DDPM (Ho et al., 2020). The quantitative comparison on unconditional CIFAR-10 is presented in Tab. 7. Our method achieves a lower FID score and a higher IS, indicating improved image distribution modeling and enhanced sample quality. The consistent performance improvement across different network architectures further demonstrates the generalizability of our proposed method. Additionally, we conduct experiments with a lower BNR value, as reported in Tab. 7, and the trend of the results aligns with those presented in Tab. 3. These findings highlight the enhanced generation quality achieved by transitioning from Hot Diffusion to our proposed Warm Diffusion. This improvement stems from effectively leveraging the spectral dependency of images while addressing out-of-manifold issues by avoiding excessive blurring in the diffusion process.
409
+
410
+ # D ADDITIONAL RESULTS
411
+
412
+ In this section, we provide additional visual comparisons to present more qualitative results. Fig. 9 showcases generated samples for various BNR values, as discussed in Tab. 3. Fig. 10 illustrates samples produced by our re-implementation of Blurring Diffusion (Hoogeboom & Salimans, 2024), as outlined in Tab. 4. A visual comparison of different parameterization techniques, discussed in Tab. 5, is shown in Fig. 11. Furthermore, uncurated (non-cherry-picked) samples from all datasets used in our experiments are presented in Fig. 12.
413
+
414
+ ![](images/5e644282139dbe44b1159adf8b539823c5ea9b076239487723fa887a40b71228.jpg)
415
+ EDM $(\mathrm{BNR} = 0)$
416
+ FID 1.97 NFE 35
417
+ Figure 9: Visual comparison of different BNR values with a constant number of sampling steps. For cases where $BNR > 0.5$ , using the same number of sampling steps, the generated samples tend to become blurrier as the BNR value increases, leading to poorer FID scores. This highlights the trade-off between blurring and noising, where overly prioritizing the blurring process can negatively impact sample quality.
418
+
419
+ ![](images/bac25c4873fd54cdc0a467148975128bd8abada872edf671fe85d0a7b66ac5d9.jpg)
420
+ BNR=0.1
421
+ FID 1.97 NFE 35
422
+
423
+ ![](images/65322ba6f7f0212ce78ba6ebdd882045f05e3a4ae303f6e7696904549662596a.jpg)
424
+ BNR=0.5
425
+ FID 1.85 NFE 35
426
+
427
+ ![](images/c1b4381c66ac3ab9a882c2e0b8db030a72c8ccf78e5abc57756d9bee3ebcc51f.jpg)
428
+ BNR=2.0
429
+ FID 2.57 NFE 35
430
+
431
+ ![](images/6e9214ff0b3fad70f41b0ae9ed30b678dae347eb541023a9e2920f4eb02ef217.jpg)
432
+ BNR=10.0
433
+ FID 11.97 NFE 35
434
+
435
+ ![](images/5654b7f9faffbe28d119d47ab7ebd6995a557b9282a22ba7feaf48d6af4c9a01.jpg)
436
+ FID 12.97 NFE 35
437
+ Figure 10: Samples generated by our re-implementation of Blurring Diffusion (Hoogeboom & Salimans, 2024) with different NFEs. As the number of sampling steps increases, the generated samples show progressively more detailed high-frequency components. Lower NFEs lead to blurrier samples because too few reverse steps are available to correct the data manifold shifts, as discussed in Sec. 4.3.
438
+
439
+ ![](images/817988621db962840c93c5250e93866b4e730080cf1ef7ca880e13da50d24b33.jpg)
440
+ FID 4.13 NFE 79
441
+
442
+ ![](images/3cc7dc31a6e6ec0e543b6a56f99814b6184d084979b5d1cab7eeb4f246dc32c7.jpg)
443
+ FID 2.91 NFE 159
444
+
445
+ ![](images/8087f026eccd8c450dadaaaeb329e57a741d104a20965b51eeb150114c4a043c.jpg)
446
+ FID 2.71 NFE 399
447
+
448
+ ![](images/cde5aebf0f2e6cf7f44a1f9acecca98b7f116bc834203d6232eb411f53344d93.jpg)
449
+ FID 2.68 NFE 999
450
+
451
+ ![](images/7d3615976101057b8c5bb2a3e12cbd75b9791a3bb2782d0855fae74473094a1c.jpg)
452
+ Setting (a)
453
+ FID 13.19 NFE 35
454
+
455
+ ![](images/87f796a596ef9fcab6622e315c6064af4dd50b827db1a7073f740273f9bcc2c5.jpg)
456
+ Setting (b)
457
+ FID 9.09 NFE 35
458
+
459
+ ![](images/07996cc8f814086730121cdd270a3f0362fe4381f1dc36eeb434a67d01f2c5ff.jpg)
460
+ Setting (c)
461
+ FID 1.98 NFE 35
462
+ Figure 11: Visual comparison of different training objectives and parameterization techniques discussed in Tab. 5. Settings (a) and (b) either suffer from inaccurate predictions or amplified high-frequency signals, which lead to poorer generation quality. Although Setting (c) generates better results, it does not take advantage of the BNR adjustment to ease the neural network's learning process. In contrast, Setting (d), which employs a divide-and-conquer strategy, benefits from the BNR adjustment by better leveraging the spectral dependency of images. This leads to improved learning for the neural network and results in a noticeable improvement in sample quality.
463
+
464
+ ![](images/3b4dda804ba0494803376fb65dd611edeabca811fc3556031f43d639eaddbfe1.jpg)
465
+ Setting (d)
466
+ FID 1.85 NFE 35
467
+
468
+ ![](images/f71105c5629a41d2b5e8157d9185d5e35e125a494c51df471fe0ead371e04445.jpg)
469
+ (a) Unconditional CIFAR-10, FID 1.85
470
+
471
+ ![](images/97efd1afce32e8dc19e17b51553144a4b822352fd0c6ebddc556f71e4b7b7d81.jpg)
472
+ (b) Class-conditional CIFAR-10, FID 1.68
473
+
474
+ ![](images/6ddbd7b9b05a823ca44a5b9cd4e1b49bc8d39eff7230d7ee26ee8c48e1bc028e.jpg)
475
+ (c) FFHQ $64\times 64$ FID 2.29
476
+ Figure 12: Uncurated samples from CIFAR-10, FFHQ $64 \times 64$ , and LSUN-church $128 \times 128$ .
477
+
478
+ ![](images/28a2c39619ad1f75594d332366ab5071844ff6a9fd2f57ba2a3b6de2ec45e5f7.jpg)
479
+ (d) LSUN-church $128\times 128$ FID 2.56
2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4d1fced3d370b535da5b06c5740407b0fee5a166a8cb25b5878ace1d2a2d9f4
3
+ size 1242533
2025/Warm Diffusion_ Recipe for Blur-Noise Mixture Diffusion Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/f38081e3-d6cf-4c34-b8dc-864bd3374a1b_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/f38081e3-d6cf-4c34-b8dc-864bd3374a1b_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/f38081e3-d6cf-4c34-b8dc-864bd3374a1b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:554b25a87d44554be385bae860942dc7a018ebf8ceaf4331b362cbd7812653fb
3
+ size 1333659
2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/full.md ADDED
@@ -0,0 +1,701 @@
1
+ # WASSERSTEIN-REGULARIZED CONFORMAL PREDICTION UNDER GENERAL DISTRIBUTION SHIFT
2
+
3
+ Rui Xu, Sihong Xie*
4
+
5
+ The Hong Kong University of Science and Technology (Guangzhou)
6
+
7
+ rxu233@connect.hkust-gz.edu.cn, sihongxie@hkust-gz.edu.cn
8
+
9
+ Chao Chen
10
+
11
+ Harbin Institute of Technology
12
+
13
+ cha01nbox@gmail.com
14
+
15
+ Yue Sun, Parvathinathan Venkitasubramaniam
16
+
17
+ Lehigh University
18
+
19
+ yus516@lehigh.edu, pav309@lehigh.edu
20
+
21
+ # ABSTRACT
22
+
23
+ Conformal prediction yields a prediction set with guaranteed $1 - \alpha$ coverage of the true target under the i.i.d. assumption, which may not hold and lead to a gap between $1 - \alpha$ and the actual coverage. Prior studies bound the gap using total variation distance, which cannot identify the gap changes under distribution shift at a given $\alpha$ . Besides, existing methods are mostly limited to covariate shift, while general joint distribution shifts are more common in practice but less researched. In response, we first propose a Wasserstein distance-based upper bound of the coverage gap and analyze the bound using probability measure pushforwards between the shifted joint data and conformal score distributions, enabling a separation of the effect of covariate and concept shifts over the coverage gap. We exploit the separation to design an algorithm based on importance weighting and regularized representation learning (WR-CP) to reduce the Wasserstein bound with a finite-sample error bound. WR-CP achieves a controllable balance between conformal prediction accuracy and efficiency. Experiments on six datasets prove that WR-CP can reduce coverage gaps to $3.2\%$ across different confidence levels and outputs prediction sets $37\%$ smaller than the worst-case approach on average.
24
+
25
+ # 1 INTRODUCTION
26
+
27
+ Because of data noise, unobservable factors, and knowledge gaps, stakeholders must also consider prediction uncertainty in machine learning applications, especially in areas such as fintech (Ryu & Ko, 2020), healthcare (Seoni et al., 2023), and autonomous driving (Feng et al., 2021). Conformal prediction (CP) addresses prediction uncertainty by generating a set of possible targets instead of a single prediction (Vovk et al., 2005; Shafer & Vovk, 2007; Angelopoulos & Bates, 2021). We focus on CP in regression tasks. With a trained model $h$ , CP calculates the difference (conformal score) between the predicted and actual target via a score function $s(x,y) = |h(x) - y|$ over some calibration instances. With the empirical $1 - \alpha$ quantile $\tau$ of the conformal scores, the prediction set $C(x)$ of a test input $x$ contains all targets whose scores are smaller than $\tau$ . If calibration and test data are independent and identically distributed (i.i.d.), the probability that the prediction set $C(x)$ contains the true target $y$ of $x$ is close to $1 - \alpha$ (i.e. the coverage guarantee).
28
+
29
+ Denote $P_{XY}$ and $Q_{XY}$ the calibration and test distributions, respectively, in space $\mathcal{X} \times \mathcal{Y}$ . We assume $y|x \sim N(f_P(x),\varepsilon_P)$ for $(x,y) \sim P_{XY}$ and $y|x \sim N(f_Q(x),\varepsilon_Q)$ for $(x,y) \sim Q_{XY}$ . In practice, the i.i.d. assumption can be violated by a joint distribution shift such that $P_{XY} \neq Q_{XY}$ , due to a covariate shift $(P_X \neq Q_X)$ , a concept shift $(f_P \neq f_Q)$ , or both (Figure 1(a) left) (Kouw & Loog, 2018). With a distribution shift, the coverage guarantee fails, leading to a gap between the probability that $y \in C(x)$ and $1 - \alpha$ . Formally, denoting $P_V$ and $Q_V$ the calibration and test conformal score distributions, respectively, the coverage gap is the difference between the cumulative distribution functions (CDFs) of $P_V$ and $Q_V$ at quantile $\tau$ (Figure 1(a) left). Prior methods are concerned with
30
+
31
+ ![](images/7489274166df373264c2eab386653c52c67eca4789043de3a882b29f3029fd49.jpg)
32
+ (a)
33
+ (b)
34
+ Choose the Right Metric to Bound Coverage Gap
35
+
36
+ ![](images/585ce37cd1a8cbd896a5a24590cbcc97c634254bc18884184d8e738c252be8cb.jpg)
37
+
38
+ ![](images/e97dd4ea6636fbe76eafc62a6ec6ca016eb2410c3f31fa353be6b7638771b67c.jpg)
39
+ Figure 1: (a) Joint distribution shift can include both covariate shift $(P_{X} \neq Q_{X})$ and concept shift $(f_{P} \neq f_{Q})$ . Coverage gap (Eq. (3)) is the absolute difference in cumulative probabilities of calibration and test conformal scores at the $1 - \alpha$ quantile $\tau$ . We address covariate-shift-induced Wasserstein distance by applying importance weighting (Tibshirani et al., 2019) to calibration samples, and further minimize concept-shift-induced Wasserstein distance to obtain accurate and efficient prediction sets; (b) $Q_{V}^{(1)}$ and $Q_{V}^{(2)}$ are two distinct test conformal score distributions. Wasserstein distance (Eq. (5)) integrates the vertical gap between two cumulative probability distributions over all quantiles, and is sensitive to coverage gap changes at any quantile. Total variation distance fails to indicate coverage gap changes thoroughly as it is agnostic about where two distributions diverge.
40
+
41
+ the worst-case shifts and passively expand prediction sets as much as possible to meet the coverage guarantee for any shifted test distribution, leading to excessively large and inefficient prediction sets (Gendler et al., 2021; Cauchois et al., 2024; Zou & Liu, 2024; Yan et al., 2024). Recent works assume knowledge about the distribution shifts between the test and calibration distributions (Barber et al., 2023; Angelopoulos et al., 2022; Colombo, 2024). The knowledge is further embedded as the total variation (TV) distance between conformal score distributions $P_V$ and $Q_V$ to bound and minimize the coverage gap. However, the TV distance ignores where two conformal score distributions differ, while the coverage gap is defined at a specific $\alpha$ and is location-dependent, making TV distance less indicative of the coverage gap during model optimization (Figure 1(b) right).
42
+
43
+ In contrast to the TV distance, we adopt the Wasserstein distance over the space of probability distributions of conformal scores to upper-bound the coverage gap under joint distribution shift. Such an upper bound integrates the vertical gap between the CDFs of two conformal score distributions $P_V$ and $Q_V$ and measures the gap at any $\alpha$ (Figure 1(b) left), indicating the coverage gap at a given $\alpha$ for distribution discrepancy minimization and coverage guarantee (Section 3.1, Appendix B). Targeting more effective algorithms specifically for the covariate and concept shifts that constitute joint distribution shift, we further dissect the complex landscape of joint distribution shift. We disentangle the complex dependencies between the Wasserstein upper bound and covariate and concept shifts using novel pushforwards of probability measures, decomposing the bound into two Wasserstein terms so that the effects of covariate and concept shifts on the coverage gap are independent (Eq. (7)). Theoretical analyses crystallize the link between CP coverage gap, the smoothness of the conformal residue and predictive model, and the amount of covariate and concept shifts (Section 4.1). The decomposition allows representation learning using importance weighting (Tibshirani et al., 2019) that reduces the covariate-shift-induced term, and minimization of the concept-shift-induced term (Figure 1(a)) with finite samples with an empirical error bound (Section 4.2). We proved the effectiveness of the resulting algorithm, Wasserstein-regularized conformal prediction (WR-CP), for multi-source domain generalization where the test distribution is an unknown mixture of training distributions. On six datasets from applications including AI4S (Brooks & Marcolini, 2014), smart transportation (Cui et al., 2019; Guo et al., 2019), and epidemic spread forecasting (Deng et al., 2020), experiments across various $\alpha$ values (0.1 to 0.9) demonstrate that coverage gaps are reduced to $3.2\%$ and the prediction set sizes are $37\%$ smaller than those generated by the worst-case approach on average.
44
+
45
+ # 2 BACKGROUND AND RELATED WORKS
46
+
47
+ # 2.1 CONFORMAL PREDICTION
48
+
49
+ Let $X \in \mathcal{X} \subseteq \mathbb{R}^d$ and $Y \in \mathcal{Y} \subseteq \mathbb{R}$ denote the input and output random variables, respectively. A hypothesis $h: \mathcal{X} \to \mathcal{Y}$ is a model trained to predict target $Y$ from feature $X$ . We observe $n$ instances $(X_1, Y_1), \ldots, (X_n, Y_n)$ from calibration distribution $P_{XY}$ . Taking $(x, y)$ as a realization of $(X, Y)$ , a score function $s(x, y): \mathcal{X} \times \mathcal{Y} \to \mathcal{V} \subseteq \mathbb{R}$ quantifies how $(x, y)$ conforms to the model $h$ . For regression tasks, typically $s(x, y) = |h(x) - y|$ . Split conformal prediction is widely used and defines calibration conformal scores $V_i = s(X_i, Y_i)$ for $i = 1, \ldots, n$ (Papadopoulos et al., 2002). Letting $\tau \in \mathcal{V}$ be the $\lceil (1 - \alpha)(n + 1) \rceil / n$ quantile of $V_1, \ldots, V_n$ , the prediction set of input $X_{n+1}$ is
50
+
51
+ $$
52
+ C \left(X _ {n + 1}\right) = \left\{\hat {y}: s \left(X _ {n + 1}, \hat {y}\right) \leq \tau , \hat {y} \in \mathcal {Y} \right\}. \tag {1}
53
+ $$
54
+
55
+ Consider the instance $(X_{n+1}, Y_{n+1})$ following a test distribution $Q_{XY}$ . If the test and calibration instances are i.i.d. (i.e. $P_{XY} = Q_{XY}$ ), the probability that the true target $Y_{n+1}$ is included in $C(X_{n+1})$ is at least $1 - \alpha$ . If calibration conformal scores are almost surely distinct, we can also bound the probability from above by $1 - \alpha + \frac{1}{n+1}$ (Angelopoulos & Bates, 2021). The bounded probability is called coverage guarantee:
56
+
57
+ $$
58
+ \Pr \left(Y _ {n + 1} \in C \left(X _ {n + 1}\right)\right) \in [ 1 - \alpha , 1 - \alpha + 1 / (n + 1)). \tag {2}
59
+ $$
60
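+ 
+ As a concrete illustration, a minimal sketch of this split-conformal procedure for regression follows; the stand-in model and synthetic calibration data are assumptions for the example only:
+ 
+ ```python
+ import numpy as np
+ 
+ rng = np.random.default_rng(0)
+ h = lambda x: 2.0 * x                             # stand-in for a trained model
+ x_cal = rng.uniform(0.0, 1.0, 500)
+ y_cal = 2.0 * x_cal + rng.normal(0.0, 0.1, 500)   # calibration pairs from P_XY
+ 
+ alpha = 0.1
+ scores = np.abs(h(x_cal) - y_cal)                 # s(x, y) = |h(x) - y|
+ n = len(scores)
+ level = np.ceil((1.0 - alpha) * (n + 1)) / n      # finite-sample corrected level
+ tau = np.quantile(scores, min(level, 1.0))
+ 
+ x_test = 0.3
+ print((h(x_test) - tau, h(x_test) + tau))         # C(x) from Eq. (1) is an interval
+ ```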
+
61
+ Vovk et al. (2005) proved that the assumption of i.i.d. instances can be relaxed to exchangeability of calibration and test instances. With exchangeability, prior CP methods proposed to improve the adaptiveness of prediction sets to different test inputs (Romano et al., 2019; 2020; Guan, 2023; Amoukou & Brunel, 2023; Han et al., 2023) and maintain conditional coverage guarantees for subpopulations of the test distribution (Gibbs et al., 2023; Jung et al., 2022; Feldman et al., 2021; Cauchois et al., 2021; Foygel Barber et al., 2021; Stutz et al., 2021; Einbinder et al., 2022b). However, when the assumption is violated so that $P_{XY} \neq Q_{XY}$ , the coverage guarantee may not hold.
62
+
63
+ # 2.2 CONFORMAL PREDICTION UNDER DISTRIBUTION SHIFTS
64
+
65
+ Covariate shift $(P_X \neq Q_X)$ : Tibshirani et al. (2019) adopted importance weighting by likelihood ratio between $P_X$ and $Q_X$ to satisfy the i.i.d assumption, so coverage is ensured under covariate shift. Concept shift $(f_P \neq f_Q)$ : Einbinder et al. (2022a); Sesia et al. (2023) addressed CP under concept shift which is represented by label noise.
66
+
67
+ Joint distribution shift $(P_{XY} \neq Q_{XY})$ consists of covariate shift $(P_X \neq Q_X)$ and/or concept shift $(f_P \neq f_Q)$ (Kouw & Loog, 2018). Barber et al. (2023), Angelopoulos et al. (2022), and Angelopoulos & Bates (2021) upper-bound coverage gap via total variation distance, but TV distance cannot identify gap changes at a fixed $\alpha$ . To reduce the gap, Gibbs & Candes (2021), Xu & Xie (2021), and Gibbs & Candes (2024) focus on CP under dynamic shift (test distribution changes over time). Meanwhile, some works concentrate on static shift (test distribution unchanged). These works can be categorized into two pipelines. The first pipeline modifies vanilla CP upon a residual-driven model for robust coverage (Gendler et al., 2021; Cauchois et al., 2024; Zou & Liu, 2024). The second pipeline incorporates a conformal-based loss during training to obtain robust and efficient prediction sets (Yan et al., 2024). However, these works treat a joint distribution shift as a whole and adopt a worst-case principle for prediction.
68
+
69
+ In this work, we explore CP under multi-source domain generalization, which focuses on developing a model that generalizes effectively to unseen test distributions by leveraging the data from multiple source distributions (Sagawa et al., 2019; Krueger et al., 2021). A related, yet distinct area is federated CP, which aims to train a model across decentralized data sources to perform well on a known test distribution (typically a uniformly weighted mixture of source distributions) without requiring centralization to ensure privacy. Regarding federated CP, FCP (Lu et al., 2023) and FedCP-QQ (Humbert et al., 2023) aim for a coverage guarantee when the test and calibration samples are exchangeable from the same mixture. When exchangeability does not hold, DP-FedCP (Plassier et al., 2023) addresses scenarios where test samples are drawn from a single source distribution, assuming that only label shifts $(P_{Y} \neq Q_{Y})$ occur among the source distributions. Besides, CP with missing outcomes is studied by Liu et al. (2024) where the samples from the test distribution are accessible. The proposed WR-CP does not consider privacy but works on a more generalized setup: the test samples are drawn from an unknown random mixture where both concept and covariate shifts can occur among the source domains.
70
+
71
+ # 3 METHOD
72
+
73
+ # 3.1 UPPER-BOUNDING COVERAGE GAP BY WASSERSTEIN DISTANCE
74
+
75
+ As shown in Figure 1(b), Wasserstein distance can effectively indicate changes in coverage gap across different values of $\alpha$ . We formally upper-bound coverage gap via Wasserstein distance. Let $V \in \mathcal{V} \subseteq \mathbb{R}$ be the random variable of conformal score. $P_V$ and $Q_V$ are calibration and test conformal score distributions, respectively. The guarantee in Eq. (2) indicates that $\operatorname*{Pr}(s(X_{n+1}, Y_{n+1}) \leq \tau) \in [1 - \alpha, 1 - \alpha + 1/(n+1))$ . $F_{P_V}$ and $F_{Q_V}$ are CDFs of $P_V$ and $Q_V$ , respectively. Under the i.i.d. assumption, $P_V = Q_V$ , and thus $F_{Q_V}(\tau) = F_{P_V}(\tau) \in [1 - \alpha, 1 - \alpha + 1/(n+1))$ . However, the assumption can be violated by a joint distribution shift, which may result in $P_V \neq Q_V$ . In this case, $F_{P_V}(\tau)$ is still bounded, but $F_{Q_V}(\tau) \neq F_{P_V}(\tau)$ . Inadequate coverage renders prediction sets unreliable, while excessive coverage leads to large prediction sets and reduced prediction efficiency. We therefore define the coverage gap as the absolute difference:
76
+
77
+ $$
78
+ \text{Coverage gap} = \left| F_{P_{V}}(\tau) - F_{Q_{V}}(\tau) \right|. \tag{3}
79
+ $$
80
+
81
+ Definition 1 (Kolmogorov Distance). (Gaunt & Li, 2023) $F_{\mu}$ and $F_{\nu}$ are the CDFs of probability measures $\mu$ and $\nu$ on $\mathbb{R}$ , respectively. Kolmogorov distance between $\mu$ and $\nu$ is given by $K(\mu, \nu) = \sup_{x \in \mathbb{R}} |F_{\mu}(x) - F_{\nu}(x)|$ .
82
+
83
+ With Definition 1, as $\tau \in \mathcal{V} \subseteq \mathbb{R}$ , Eq. (3) is bounded by $K(P_V, Q_V)$ :
84
+
85
+ $$
86
+ \text{Coverage gap} = \left| F_{P_{V}}(\tau) - F_{Q_{V}}(\tau) \right| \leq \sup_{v \in \mathcal{V}} \left| F_{P_{V}}(v) - F_{Q_{V}}(v) \right| = K\left(P_{V}, Q_{V}\right). \tag{4}
87
+ $$
88
+
89
+ Definition 2 (p-Wasserstein Distance). (Panaretos & Zemel, 2019) Given two probability measures $\mu$ and $\nu$ on a metric space $(\mathcal{X}, c_{\mathcal{X}})$ , where $\mathcal{X}$ is a set and $c_{\mathcal{X}}$ is a metric on $\mathcal{X}$ , the Wasserstein distance of order $p \geq 1$ between $\mu$ and $\nu$ is
90
+
91
+ $$
92
+ W _ {p} (\mu , \nu) = \inf _ {\gamma \in \Gamma (\mu , \nu)} \left(\int_ {\mathcal {X} \times \mathcal {X}} c _ {\mathcal {X}} (x _ {1}, x _ {2}) ^ {p} \mathrm {d} \gamma (x _ {1}, x _ {2})\right) ^ {1 / p}, \tag {5}
93
+ $$
94
+
95
+ where $\Gamma (\mu ,\nu)$ is the set of all joint probability measures $\gamma$ on $\mathcal{X}\times \mathcal{X}$ with marginals $\gamma (\mathcal{A}\times \mathcal{X}) = \mu (\mathcal{A})$ and $\gamma (\mathcal{X}\times \mathcal{B}) = \nu (\mathcal{B})$ for all measurable sets $\mathcal{A},\mathcal{B}\subseteq \mathcal{X}$ .
96
+
97
+ Proposition 1. (Ross, 2011) If a probability measure $\mu$ in space $\mathbb{R}$ has Lebesgue density bounded by $L$ , then for any probability measure $\nu$ , $K(\mu, \nu) \leq \sqrt{2LW_1(\mu, \nu)}$ .
98
+
99
+ In this work, let $W$ denote Wasserstein distance with $p = 1$ . Applying Eq. (4) and Proposition 1 with $L$ as the Lebesgue density bound of $P_V$ , we can develop an upper bound by
100
+
101
+ $$
102
+ \text{Coverage gap} \leq \sqrt{2 L W\left(P_{V}, Q_{V}\right)}. \tag{6}
103
+ $$
104
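+ 
+ In one dimension, both quantities in Eq. (6) are straightforward to estimate from samples. A minimal sketch, where the density bound $L$ is a user-supplied assumption:
+ 
+ ```python
+ import numpy as np
+ from scipy.stats import wasserstein_distance
+ 
+ def coverage_gap_bound(cal_scores, test_scores, L):
+     """Empirical form of Eq. (6): sqrt(2 * L * W1(P_V, Q_V)).
+ 
+     L bounds the Lebesgue density of the calibration score distribution
+     and must be supplied (or estimated) by the user.
+     """
+     w1 = wasserstein_distance(cal_scores, test_scores)
+     return np.sqrt(2.0 * L * w1)
+ ```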
+
105
+ # 3.2 WASSERSTEIN DISTANCE DECOMPOSITION AND MINIMIZATION
106
+
107
+ In Eq. (6), we show that the Wasserstein distance $W(P_V, Q_V)$ can effectively bound the coverage gap caused by a joint distribution shift. However, it is still not clear how the two components of joint distribution shift, namely covariate shift in space $\mathcal{X}$ and concept shift in space $\mathcal{Y}$ , lead to $W(P_V, Q_V)$ in space $\mathcal{V}$ . Besides, we want the quantified contributions to be amenable to optimization techniques that reduce $W(P_V, Q_V)$ . To the best of our knowledge, there is no prior work that suits this need. Therefore, we propose to upper-bound $W(P_V, Q_V)$ with two discrepancy terms due to covariate and concept shifts, and corresponding optimization methods to reduce $W(P_V, Q_V)$ via minimizing the two terms.
108
+
109
+ Definition 3 (Pushforward Measure). If $\mathcal{X}$ and $\mathcal{Y}$ are measurable spaces, $\mu$ is a probability measure on $\mathcal{X}$ , and $f: \mathcal{X} \to \mathcal{Y}$ is a measurable function, define the pushforward $f_{\#} \mu$ of $\mu$ through $f$ such that $f_{\#} \mu(\mathcal{A}) = \mu(f^{-1}(\mathcal{A}))$ for all measurable sets $\mathcal{A} \subseteq \mathcal{Y}$ .
110
+
111
+ With Definition 3, we have $P_{Y} = f_{P\#}P_{X}$ and $Q_{Y} = f_{Q\#}Q_{X}$ . Besides, we define $s_P(x) = s(x,f_P(x)) = |h(x) - f_P(x)|$ for $x \sim P_X$ , and $s_Q(x) = s(x,f_Q(x)) = |h(x) - f_Q(x)|$ for $x \sim Q_X$ , leading to pushforwards of the conformal score $P_V = s_{P\#}P_X$ and $Q_V = s_{Q\#}Q_X$ .
112
+
113
+ To upper-bound $W(P_V, Q_V)$ over conformal scores according to covariate (concept, resp.) shifts in the $\mathcal{X}$ ( $\mathcal{Y}$ , resp.) space, we introduce a pushforward $Q_{V,s_P} = s_{P\#}Q_X$ on $\mathcal{V}$ . Since $P_V$ and $Q_{V,s_P}$ are pushforward measures by the same function $s_P$ from $P_X$ and $Q_X$ , respectively, $W(P_V,Q_{V,s_P})$ is a measure of covariate shift ( $P_X \neq Q_X$ ). Also, as $Q_{V,s_P}$ and $Q_V$ are pushforward measures from the same source $Q_X$ by $s_P$ and $s_Q$ , respectively, $W(Q_{V,s_P},Q_V)$ can indicate the extent of concept shift ( $f_P \neq f_Q$ , and thus $s_P \neq s_Q$ ). The relationships among the pushforward measures are shown in Figure 2. As Panaretos & Zemel (2019) state, the triangle inequality gives
114
+
115
+ $$
116
+ W \left(P _ {V}, Q _ {V}\right) \leq W \left(P _ {V}, Q _ {V, s _ {P}}\right) + W \left(Q _ {V, s _ {P}}, Q _ {V}\right). \tag {7}
117
+ $$
118
+
119
+ With Eq. (7) bounding $W(P_V, Q_V)$ , we design an approach to minimize the upper bound. First, we adopt importance weighting, which weights calibration conformal scores with the likelihood ratio $\mathrm{d}Q_X(x) / \mathrm{d}P_X(x)$ . Tibshirani et al. (2019) prove that importance weighting can preserve the coverage guarantee when only a covariate shift occurs. However, existing works do not include the weighting technique when dealing with a joint distribution shift. We prove that importance weighting can minimize covariate-shift-induced Wasserstein distance, $W(P_V, Q_{V,s_P})$ , even if a concept shift coincides. Given any measurable set $\mathcal{A} \subseteq \mathcal{X}$ , $\mathcal{B} := \{s_P(x) : x \in \mathcal{A}\} \subseteq \mathcal{V}$ . With Definition 3,
120
+
121
+ ![](images/2e57a344ef8e8d3341ac3aead00c9d61dd1dbb64fce209be382460b87a78ed89.jpg)
122
+ Figure 2: Pushforward measures.
123
+
124
+ $$
125
+ \begin{array}{l} P_{V}(\mathcal{B}) = \int_{\mathcal{B}} \mathrm{d} P_{V}(v) = \int_{\mathcal{B}} \mathrm{d}\left(s_{P\#} P_{X}\right)(v) = \int_{\mathcal{A}} \mathrm{d} P_{X}(x) \xrightarrow{\text{weighting}} \int_{\mathcal{A}} \frac{\mathrm{d} Q_{X}(x)}{\mathrm{d} P_{X}(x)} \mathrm{d} P_{X}(x) \\ = \int_{\mathcal{A}} \mathrm{d} Q_{X}(x) = \int_{\mathcal{B}} \mathrm{d}(s_{P\#} Q_{X})(v) = \int_{\mathcal{B}} \mathrm{d} Q_{V, s_{P}}(v) = Q_{V, s_{P}}(\mathcal{B}). \end{array}
126
+ $$
127
+
128
+ Since importance weighting can transform $P_V$ to $Q_{V,s_P}$ , $W(P_V, Q_{V,s_P})$ is minimized, and the remaining term in the upper bound in Eq. (7) is the concept-shift-induced component $W(Q_{V,s_P}, Q_V)$ . Next, we further minimize it during training, as illustrated in Figure 1. The reasoning behind distinguishing between covariate and concept shifts is elaborated in Appendix C.
129
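+ 
+ A sketch of the corresponding weighted calibration step is shown below; the weighted-quantile helper follows the inference procedure of Tibshirani et al. (2019), and the likelihood ratios are assumed to come from density estimates of $P_X$ and $Q_X$ :
+ 
+ ```python
+ import numpy as np
+ 
+ def weighted_quantile(scores, weights, alpha):
+     """1 - alpha quantile of an importance-weighted empirical score distribution."""
+     order = np.argsort(scores)
+     s, w = scores[order], weights[order]
+     cdf = np.cumsum(w / w.sum())
+     idx = min(np.searchsorted(cdf, 1.0 - alpha), len(s) - 1)
+     return s[idx]
+ 
+ # weights[i] approximates dQ_X(x_i) / dP_X(x_i) at the i-th calibration input,
+ # e.g. the ratio of two kernel density estimates fit on test and calibration features.
+ ```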
+
130
+ # 4 THEORY
131
+
132
+ # 4.1 UPPER-BOUNDING WASSERSTEIN DISTANCE BY COVARIATE AND CONCEPT SHIFTS
133
+
134
+ Although Eq. (7) upper-bounds $W(P_{V}, Q_{V})$ by $W(P_{V}, Q_{V, s_{P}})$ and $W(Q_{V, s_{P}}, Q_{V})$ , it remains unclear how covariate and concept shifts give rise to these terms. Covariate shift can be more accurately quantified by $W(P_{X}, Q_{X})$ . Also, with $Q_{Y, f_{P}} = f_{P \#} Q_{X}$ , $W(Q_{Y, f_{P}}, Q_{Y})$ is a more direct way to measure concept shift by comparing $f_{P}$ and $f_{Q}$ based on $Q_{X}$ . Therefore, we further upper-bound the two terms on the right-hand side of Eq. (7) using $W(P_{X}, Q_{X})$ and $W(Q_{Y, f_{P}}, Q_{Y})$ . We extend a theorem in Aolaritei et al. (2022), which pushes two probability measures with the same function, while Theorem 1 considers pushing with different functions.
135
+
136
+ Theorem 1. For probability measures $\mu$ and $\nu$ on metric space $(\mathcal{X}, c_{\mathcal{X}})$ , letting $f, g: \mathcal{X} \to \mathcal{Y}$ be measurable functions, $\mu_f$ and $\nu_g$ on metric space $(\mathcal{Y}, c_{\mathcal{Y}})$ are pushforwards of $\mu$ and $\nu$ under functions $f$ and $g$ , respectively. The Wasserstein distance between $\mu_f$ and $\nu_g$ holds the equivalence:
137
+
138
+ $$
139
+ W(\mu_{f},\nu_{g}) = \inf_{\gamma^{\prime}\in \Gamma (\mu_{f},\nu_{g})}\int \limits_{\mathcal{Y}\times \mathcal{Y}}c_{\mathcal{Y}}(y_{1},y_{2}) \mathrm{d}\gamma^{\prime}(y_{1},y_{2}) = \inf_{\gamma \in \Gamma (\mu ,\nu)}\int \limits_{\mathcal{X}\times \mathcal{X}}c_{\mathcal{Y}}(f(x_{1}),g(x_{2})) \mathrm{d}\gamma (x_{1},x_{2}).
140
+ $$
141
+
142
+ As $\mathcal{V},\mathcal{Y}\subseteq \mathbb{R}$ , we take $c_{\mathcal{V}}(x_1,x_2) = c_{\mathcal{Y}}(x_1,x_2) = |x_1 - x_2|$ , and Theorem 1 leads to the following:
143
+
144
+ $$
145
+ W \left(Q _ {V, s _ {P}}, Q _ {V}\right) = \inf _ {\gamma \in \Gamma \left(Q _ {X}, Q _ {X}\right)} \int_ {\mathcal {X} \times \mathcal {X}} \left| s _ {P} \left(x _ {1}\right) - s _ {Q} \left(x _ {2}\right) \right| d \gamma \left(x _ {1}, x _ {2}\right), \tag {8}
146
+ $$
147
+
148
+ $$
149
+ W \left(Q _ {Y, f _ {P}}, Q _ {Y}\right) = \inf _ {\gamma \in \Gamma \left(Q _ {X}, Q _ {X}\right)} \int_ {\mathcal {X} \times \mathcal {X}} \left| f _ {P} \left(x _ {1}\right) - f _ {Q} \left(x _ {2}\right) \right| d \gamma \left(x _ {1}, x _ {2}\right). \tag {9}
150
+ $$
151
+
152
+ Let $\gamma^{*}$ be the optimal transport plan of $W(Q_{Y,f_P},Q_Y)$ . With $\eta = \max_{x_1,x_2\in \mathcal{X}}\frac{|s_P(x_1) - s_Q(x_2)|}{|f_P(x_1) - f_Q(x_2)|}$ , we have
153
+
154
+ $$
155
+ \begin{aligned} W\left(Q_{V, s_{P}}, Q_{V}\right) &\leq \int_{\mathcal{X} \times \mathcal{X}} \left| s_{P}\left(x_{1}\right) - s_{Q}\left(x_{2}\right) \right| \mathrm{d}\gamma^{*}\left(x_{1}, x_{2}\right) \\ &\leq \int_{\mathcal{X} \times \mathcal{X}} \eta \left| f_{P}(x_{1}) - f_{Q}(x_{2}) \right| \mathrm{d}\gamma^{*}(x_{1}, x_{2}) = \eta W(Q_{Y, f_{P}}, Q_{Y}). \end{aligned} \tag{10}
156
+ $$
157
+
158
+ In Eq. (10), the first inequality holds as $\gamma^{*}$ may not be the optimal transport plan of $W(Q_{V,s_P},Q_V)$ and the second inequality follows the definition of $\eta$ . Appendix D shows a geometric intuition of $\eta$ .
159
+
160
+ Theorem 2. For probability measures $\mu$ and $\nu$ on metric space $(\mathcal{X}, c_{\mathcal{X}})$ with a measurable function $f: \mathcal{X} \to \mathcal{Y}$ , $\mu_f$ and $\nu_f$ on metric space $(\mathcal{Y}, c_{\mathcal{Y}})$ are the pushforward of $\mu$ and $\nu$ through function $f$ , respectively. If $f$ has Lipschitz continuity constant $\kappa$ , i.e., $\frac{c_{\mathcal{Y}}(f(x_1), f(x_2))}{c_{\mathcal{X}}(x_1, x_2)} \leq \kappa, \forall x_1, x_2 \in \mathcal{X}$ ,
161
+
162
+ $$
163
+ W \left(\mu_ {f}, \nu_ {f}\right) \leq \kappa W (\mu , \nu). \tag {11}
164
+ $$
165
+
166
+ As $\mathcal{X} \subseteq \mathbb{R}^d$ , $c_{\mathcal{X}}(x_1, x_2) = \|x_1 - x_2\|_2$ . $\kappa$ is the Lipschitz constant of $s_P: \mathcal{X} \to \mathcal{V}$ such that $\frac{|s_P(x_1) - s_P(x_2)|}{\|x_1 - x_2\|_2} \leq \kappa, \forall x_1, x_2 \in \mathcal{X}$ . With Theorem 2, as $P_V$ and $Q_{V,s_P}$ are pushforwards of $P_X$ and $Q_X$ through $s_P$ , we have
167
+
168
+ $$
169
+ W \left(P _ {V}, Q _ {V, s _ {P}}\right) \leq \kappa W \left(P _ {X}, Q _ {X}\right). \tag {12}
170
+ $$
171
+
172
+ Plugging Eq. (10) and Eq. (12) into Eq. (7), $W(P_{V},Q_{V}) \leq \kappa W(P_{X},Q_{X}) + \eta W(Q_{Y,f_{P}},Q_{Y})$ . Therefore, by utilizing Eq. (6), we can further bound the coverage gap using the magnitudes of covariate and concept shifts:
173
+
174
+ $$
175
+ \text{Coverage gap} \leq \sqrt{2 L \left(\kappa W(P_{X}, Q_{X}) + \eta W(Q_{Y, f_{P}}, Q_{Y})\right)}. \tag{13}
176
+ $$
177
+
178
+ Equation (13) highlights how covariate and concept shifts impact the coverage gap. While the values of $W(P_{X},Q_{X})$ and $W(Q_{Y,f_P},Q_Y)$ are inherent properties of the given data and cannot be altered, the parameters $\kappa$ and $\eta$ are linked to the model $h$ , allowing us to minimize $\kappa$ and $\eta$ by optimizing $h$ .
179
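+ 
+ For intuition, $\kappa$ can be probed empirically on samples. A crude sketch follows (it yields a lower estimate, since only randomly sampled pairs are inspected; the vectorized score function is an assumption):
+ 
+ ```python
+ import numpy as np
+ 
+ def empirical_lipschitz(f, X, n_pairs=10000, seed=0):
+     """Lower estimate of the Lipschitz constant of f over pairs sampled from X.
+ 
+     X: array of shape (n, d); f: vectorized map from (m, d) arrays to (m,) arrays.
+     """
+     rng = np.random.default_rng(seed)
+     i = rng.integers(0, len(X), n_pairs)
+     j = rng.integers(0, len(X), n_pairs)
+     mask = i != j
+     i, j = i[mask], j[mask]
+     num = np.abs(f(X[i]) - f(X[j]))
+     den = np.linalg.norm(X[i] - X[j], axis=-1)
+     return float(np.max(num / den))
+ ```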
+
180
+ # 4.2 EMPIRICAL UPPER BOUND OF COVERAGE GAP
181
+
182
+ In practice, $P_V$ and $Q_V$ are rarely available. Sometimes we may have access to their empirical distributions via the score function $s$ , where $\hat{P}_V$ is derived from $n$ calibration samples and $\hat{Q}_V$ is obtained from $m$ test samples. Having the Wasserstein distance between the two empirical distributions $W(\hat{P}_V, \hat{Q}_V)$ , we derive the error bound between the empirical form and $W(P_V, Q_V)$ by asymptotic estimation.
183
+
184
+ Definition 4 (Upper Wasserstein Dimension). (Dudley, 1969) Given a set $\mathcal{A} \subseteq \mathcal{X}$ , the $\epsilon$ -covering number, denoted $\mathcal{N}_{\epsilon}(\mathcal{A})$ , is the minimum $b$ such that $b$ closed balls, $\mathcal{B}_1, \ldots, \mathcal{B}_b$ , of diameter $\epsilon$ satisfy $\mathcal{A} \subseteq \cup_{1 \leq i \leq b} \mathcal{B}_i$ . For a distribution $\mu$ in $\mathcal{X}$ , the $(\epsilon, \zeta)$ -dimension is $d_{\epsilon}(\mu, \zeta) = -\log (\inf \{\mathcal{N}_{\epsilon}(\mathcal{A}) : \mu(\mathcal{A}) \geq 1 - \zeta\}) / \log \epsilon$ . The upper Wasserstein dimension with $p = 1$ is
185
+
186
+ $$
187
+ d _ {W} (\mu) = \inf \left\{\varphi \in (2, \infty): \lim \sup _ {\epsilon \rightarrow 0} d _ {\epsilon} \left(\mu , \epsilon^ {\frac {\varphi}{\varphi - 2}}\right) \leq \varphi \right\}. \tag {14}
188
+ $$
189
+
190
+ With the definition of upper Wasserstein dimension, Weed & Bach (2019) characterized how an empirical distribution converges to its population in terms of the Wasserstein distance between them.
191
+
192
+ Proposition 2. (Weed & Bach, 2019) Given a probability measure $\mu$ , $\sigma > d_W(\mu)$ . If $\hat{\mu}_n$ is an empirical measure corresponding to $n$ i.i.d. samples from $\mu$ , $\exists \lambda \in \mathbb{R}$ such that $\mathbb{E}[W(\mu, \hat{\mu}_n)] \leq \lambda n^{-1/\sigma}$ . Furthermore, for $t > 0$ , $\operatorname*{Pr}(W(\mu, \hat{\mu}_n) \geq \mathbb{E}[W(\mu, \hat{\mu}_n)] + t) \leq e^{-2nt^2}$ .
193
+
194
+ Theorem 3. Given two probability measures $\mu$ and $\nu$ , $\sigma_{\mu} > d_W(\mu)$ and $\sigma_{\nu} > d_W(\nu)$ . $\hat{\mu}_n$ and $\hat{\nu}_m$ are empirical measures corresponding to $n$ i.i.d. samples from $\mu$ and $m$ i.i.d. samples from $\nu$ , respectively. For $t_\mu, t_\nu > 0$ , $\exists \lambda_\mu, \lambda_\nu \in \mathbb{R}$ with probability at least $(1 - e^{-2nt_\mu^2})(1 - e^{-2mt_\nu^2})$ that
195
+
196
+ $$
197
+ W (\mu , \nu) \leq W (\hat {\mu} _ {n}, \hat {\nu} _ {m}) + \lambda_ {\mu} n ^ {- 1 / \sigma_ {\mu}} + \lambda_ {\nu} m ^ {- 1 / \sigma_ {\nu}} + t _ {\mu} + t _ {\nu}. \tag {15}
198
+ $$
199
+
200
+ Applying Theorem 3 to Eq. (6), we derive an empirical upper bound of coverage gap. Specifically, if $P_V$ has Lebesgue density bounded by $L$ , for $t_P, t_Q > 0$ , $\sigma_P > d_W(P_V)$ , and $\sigma_Q > d_W(Q_V)$ , $\exists \lambda_P, \lambda_Q \in \mathbb{R}$ with probability at least $(1 - e^{-2nt_P^2})(1 - e^{-2mt_Q^2})$ that
201
+
202
+ $$
203
+ \text{Coverage gap} \leq \sqrt{2 L \left(W\left(\hat{P}_{V}, \hat{Q}_{V}\right) + \lambda_{P} n^{-1/\sigma_{P}} + \lambda_{Q} m^{-1/\sigma_{Q}} + t_{P} + t_{Q}\right)}. \tag{16}
204
+ $$
205
+
206
+ # 5 APPLICATION TO MULTI-SOURCE CONFORMAL PREDICTION
207
+
208
+ In this work, we consider the test distribution to be an unknown random mixture of multiple training distributions, referred to as multi-source domain generalization (Sagawa et al., 2019). As highlighted by Cauchois et al. (2024), achieving $1 - \alpha$ coverage for each of the training distributions ensures that the coverage on test data remains at $1 - \alpha$ if the test distribution is any mixture of the training distributions. We apply the methodology outlined in Section 3 to this scenario, namely multi-source conformal prediction. Given training distributions $D_{XY}^{(i)}$ for $i = 1,\dots,k$ , we require $Q_{XY}$ to satisfy
209
+
210
+ $$
211
+ Q _ {X Y} \in \left\{\sum_ {i = 1} ^ {k} w _ {i} D _ {X Y} ^ {(i)}: w _ {1}, \dots , w _ {k} \geq 0, \sum_ {i = 1} ^ {k} w _ {i} = 1 \right\}. \tag {17}
212
+ $$
213
+
214
+ In other words, $Q_{XY}$ is an unknown random mixture of $D_{XY}^{(i)}$ for $i = 1,\dots,k$ . Next, we introduce a surrogate of $W(Q_{V,s_P},Q_V)$ , allowing the minimization of $W(Q_{V,s_P},Q_V)$ even when the test distribution $Q_{XY}$ is unknown in practice. With the score function $s(x,y)$ and $D_V^{(i)} = s_\# D_{XY}^{(i)}$ ,
215
+
216
+ $$
217
+ Q _ {V} = s _ {\#} Q _ {X Y} = s _ {\#} \sum_ {i = 1} ^ {k} w _ {i} D _ {X Y} ^ {(i)} = \sum_ {i = 1} ^ {k} w _ {i} s _ {\#} D _ {X Y} ^ {(i)} = \sum_ {i = 1} ^ {k} w _ {i} D _ {V} ^ {(i)}. \tag {18}
218
+ $$
219
+
220
+ By marginalizing out $Y$ in Eq. (17), we obtain $Q_{X} = \sum_{i=1}^{k} w_{i} D_{X}^{(i)}$ . Similar to Eq. (18), with score function $s_{P}(x)$ and $D_{V,s_{P}}^{(i)} = s_{P\#} D_{X}^{(i)}$ , $Q_{V,s_{P}} = s_{P\#} Q_{X} = \sum_{i=1}^{k} w_{i} D_{V,s_{P}}^{(i)}$ .
221
+
222
+ Theorem 4. In space $\mathcal{X} \subseteq \mathbb{R}$ , $\nu$ is a mixture distribution of multiple distributions $\nu^{(i)}$ , $i = 1, \dots, k$ , such that $\nu = \sum_{i=1}^{k} w_i \nu^{(i)}$ with $w_1, \dots, w_k \geq 0$ , $\sum_{i=1}^{k} w_i = 1$ . For any distribution $\mu$ on $\mathcal{X}$ , Wasserstein distance has the inequality that $W(\mu, \nu) \leq \sum_{i=1}^{k} w_i W(\mu, \nu^{(i)})$ .
223
+
224
+ By Theorem 4, $W(Q_{V,s_P},Q_V) \leq \sum_{i = 1}^k w_iW(Q_{V,s_P},D_V^{(i)}) \leq \sum_{i = 1}^k \sum_{j = 1}^k w_i w_j W(D_{V,s_P}^{(j)},D_V^{(i)})$ . The inequality offers a surrogate of $W(Q_{V,s_P},Q_V)$ . Even if $Q_{XY}$ is unknown, with uniformly distributed weights, we minimize the expectation of the surrogate with $w_{i} = 1 / k$ for $i = 1,\dots,k$ : $\min \frac{1}{k}\sum_{i = 1}^{k}W(D_{V,s_P}^{(i)},D_V^{(i)})$ . Besides reducing the coverage gap, we also want smaller prediction errors, so we include empirical risk minimization (ERM) (Vapnik, 1991) during training. Hence, with a loss function $l$ and a parameterized model $h_\theta$ , we merge the constant $1 / k$ with a hyperparameter $\beta$ and introduce the objective function
225
+
226
+ $$
227
+ \min _ {\theta} \sum_ {i = 1} ^ {k} \mathbb {E} _ {(x, y) \sim D _ {X Y} ^ {(i)}} [ l (h _ {\theta} (x), y) ] + \beta \sum_ {i = 1} ^ {k} W \left(D _ {V, s _ {P}} ^ {(i)}, D _ {V} ^ {(i)}\right). \tag {19}
228
+ $$
229
+
230
+ We design Wasserstein-regularized Conformal Prediction (WR-CP) to optimize $h_{\theta}$ by Eq. (19) with finite samples and generate prediction sets with small coverage gaps. $S_{XY}^{(i)}$ is the sample set drawn from $D_{XY}^{(i)}$ for $i = 1, \dots, k$ , and $S_{XY}^{P}$ is the sample set drawn from $P_{XY}$ . $S_{XY}^{Q}$ is a test set containing samples from an unknown distribution $Q_{XY}$ . Algorithm 1 shows the implementation of WR-CP. Kernel density estimation (KDE) is applied to obtain $\hat{P}_X$ , $\hat{D}_X^{(i)}$ , and $\hat{Q}_X$ for the calculation of likelihood ratios, whereas $\hat{D}_V^{(i)}$ and $\hat{D}_{V,s_P}^{(i)}$ are estimated as discontinuous, point-wise distributions to ensure differentiability during training. We show the details of distribution estimation in Appendix E. As Algorithm 1 indicates, in the prediction phase, WR-CP follows the inference procedure of importance-weighted conformal prediction (IW-CP) proposed by Tibshirani et al. (2019). When $\beta = 0$ , Eq. (19) returns to empirical risk minimization, and thus WR-CP becomes IW-CP.
231
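+ 
+ A sketch of the regularized objective in Eq. (19) for a single source domain is given below; it uses the closed form of the 1-D $W_1$ distance between equal-size empirical samples (sorting) as a simplification of the point-wise estimation used in Algorithm 1:
+ 
+ ```python
+ import numpy as np
+ 
+ def w1_1d(u, v):
+     """W1 between two 1-D empirical distributions of equal size:
+     mean absolute difference of the sorted samples (quantile coupling)."""
+     return np.mean(np.abs(np.sort(u) - np.sort(v)))
+ 
+ def wr_objective(pred, y, weighted_cal_scores, beta):
+     """ERM term plus the Wasserstein regularizer for one domain i:
+     |pred - y| yields D_V^(i); weighted_cal_scores approximates D_{V,s_P}^(i)."""
+     erm = np.mean((pred - y) ** 2)
+     reg = w1_1d(np.abs(pred - y), weighted_cal_scores)
+     return erm + beta * reg
+ ```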
+
232
+ # 6 EXPERIMENTS
233
+
234
+ # 6.1 DATASETS AND MODELS
235
+
236
+ Experiments were conducted on six datasets: (a) the airfoil self-noise dataset (Brooks & Marcolini, 2014); (b) Seattle-loop (Cui et al., 2019), PeMSD4, PeMSD8 (Guo et al., 2019) for traffic speed prediction; (c) Japan-Prefectures and U.S.-States (Deng et al., 2020) for epidemic spread forecasting. $k = 3$ for the airfoil self-noise dataset, and $k = 10$ for the other five datasets. We conducted 10 sampling trials for each dataset. Within each trial, we sampled $S_{XY}^{(i)}$ from each subset $i$ , for
237
+
238
+ Algorithm 1 Wasserstein-regularized Conformal Prediction (WR-CP)
239
+ Require: training set $S_{XY}^{(i)}$ from distribution $D_{XY}^{(i)}$ for $i = 1,\dots,k$ ; calibration set $S_{XY}^{P}$ from $P_{XY}$ ; $N$ training epochs; model $h_\theta$ ; score function $s(x,y) = |h_{\theta}(x) - y|$ ; loss function $l$ ; balancing hyperparameter $\beta$
240
+ Training Phase:
241
+ 1: Obtain $\hat{P}_X$ and $\hat{D}_X^{(i)}$ for $i = 1,\ldots ,k$ by kernel density estimation;
242
+ 2: for $j = 1$ to $N$ do
243
+ 3: $S_V^P = \{s(x,y):(x,y)\in S_{XY}^P\}$ .
244
+ 4: for $i = 1$ to $k$ do
245
+ 5: Obtain $\hat{D}_V^{(i)}$ from $S_V^{(i)}\coloneqq \{s(x,y):(x,y)\in S_{XY}^{(i)}\}$ by point-wise distribution estimation;
246
+ 6: Weight all $v\in S_V^P$ with normalized $\frac{\mathrm{d}\hat{D}_X^{(i)}(x)}{\mathrm{d}\hat{P}_X(x)}$ , where $x$ is the feature such that $(x,y)\in S_{XY}^{P}$ and $s(x,y) = v$ ;
247
+ 7: Obtain $\hat{D}_{V,s_P}^{(i)}$ from the weighted $S_V^P$ by point-wise distribution estimation;
248
+ 8: end for
249
+ 9: Optimize $h_\theta$ by $\min_{\theta}\sum_{i = 1}^{k}\mathbb{E}_{(x,y)\in S_{XY}^{(i)}}[l(h_\theta (x),y)] + \beta \sum_{i = 1}^{k}W(\hat{D}_{V,s_P}^{(i)},\hat{D}_V^{(i)})$
250
+ 10: end for
251
+ Prediction Phase:
252
+ 11: Obtain $\hat{Q}_X$ by kernel density estimation;
253
+ 12: $S_V^P = \{s(x,y):(x,y)\in S_{XY}^P\}$ .
254
+ 13: Weight all $v\in S_V^P$ with normalized $\frac{\mathrm{d}\hat{Q}_X(x)}{\mathrm{d}\hat{P}_X(x)}$ , where $x$ is the feature such that $(x,y)\in S_{XY}^{P}$ and $s(x,y) = v$ ;
255
+ 14: Set $\tau$ to the $1 - \alpha$ quantile of the weighted $S_V^P$ ;
256
+ 15: for $(x,y)\in S_{XY}^{Q}$ do
257
+ 16: $C(x) = \{\hat{y}:s(x,\hat{y})\leq \tau ,\hat{y}\in \mathcal{Y}\}$ .
258
+ 17: end for
259
+
260
+ $i = 1,\dots,k$ . Given that calibration and training data are commonly assumed to follow the same distribution in CP, we sampled $S_{XY}^{P}$ from the union of the $k$ subsets. Additionally, we generated $10k$ test sets for each dataset in every trial. A multi-layer perceptron (MLP) with an architecture of (input dimension, 64, 64, 1) was utilized in all experimental setups to maintain comparison fairness. Detailed information about the datasets and the sampling procedure is given in Appendix F.1. The code of our work is released at https://github.com/rxu0112/WR-CP.
261
+
262
+ # 6.2 CORRELATION BETWEEN WASSERSTEIN DISTANCE AND COVERAGE GAP
263
+
264
+ We demonstrated that Wasserstein distance comprehensively indicates coverage gap changes across $\alpha$ from 0.1 to 0.9, as illustrated in Figure 1(b). Specifically, for each dataset, $h_\theta$ was optimized by empirical risk minimization. Then, we applied vanilla conformal prediction to each test set and calculated the average value of coverage gaps for $\alpha$ values from 0.1 to 0.9. Meanwhile, we also computed the Wasserstein distances between the calibration and each test conformal score distribution. Our findings highlighted a strong positive monotonic relationship between Wasserstein distance and the average coverage gap, indicating its sensitivity to coverage gap changes across different $\alpha$ .
265
+
266
+ Baselines. Three baseline distance measures were selected. First of all, total variation (TV) distance was chosen, as Barber et al. (2023) aimed to use it to bound the coverage gap. Besides, Kullback-Leibler (KL) divergence and expectation difference $(\Delta \mathbb{E})$ were selected as they are widely applied in domain adaptation research (Nguyen et al., 2021; Magliacane et al., 2018).
267
+
268
+ Metric. We applied Spearman's coefficient, $-1 \leq r_s \leq 1$ , to quantify the monotonic relationship between distance measures and the average coverage gap. The absolute value of the coefficient represents the strength of the correlation. Its sign indicates whether a correlation is positive or negative. A higher positive $r_s$ means a stronger positive monotonic relation. We give the detailed definition of Spearman's coefficient in Appendix F.2.
269
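+ 
+ For reference, the coefficient can be computed directly with SciPy; the arrays below are toy values for illustration:
+ 
+ ```python
+ import numpy as np
+ from scipy.stats import spearmanr
+ 
+ # one Wasserstein distance and one average coverage gap per test set (toy values)
+ distances = np.array([0.12, 0.30, 0.05, 0.22])
+ gaps = np.array([0.04, 0.09, 0.02, 0.06])
+ r_s, p_value = spearmanr(distances, gaps)  # r_s near 1: strong positive monotonic relation
+ print(r_s)
+ ```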
+
270
+ Result. Table 1 presents Spearman's coefficients between distance measures and the average coverage gap across the six datasets, with standard deviations shown in parentheses. The highest coefficient is in bold and the second-highest is underlined. The result shows that Wasserstein distance consistently exhibits a high coefficient, suggesting that it is an effective indicator of the average coverage gap and establishing it as a suitable optimization metric for maintaining coverage guarantees across various $\alpha$ values.
271
+
272
+ Table 1: Spearman's coefficients between distance measures and the average coverage gap
273
+
274
+ <table><tr><td>Dataset</td><td>Airfoil</td><td>PeMSD4</td><td>PeMSD8</td><td>Seattle</td><td>U.S.</td><td>Japan</td></tr><tr><td>W</td><td>0.59 (0.24)</td><td>0.84 (0.03)</td><td>0.90 (0.03)</td><td>0.84 (0.05)</td><td>0.77 (0.06)</td><td>0.57 (0.05)</td></tr><tr><td>TV</td><td>0.45 (0.16)</td><td>0.88 (0.03)</td><td>0.86 (0.06)</td><td>0.75 (0.09)</td><td>0.67 (0.10)</td><td>0.37 (0.06)</td></tr><tr><td>KL</td><td>0.40 (0.21)</td><td>0.49 (0.17)</td><td>0.51 (0.09)</td><td>0.45 (0.17)</td><td>0.60 (0.11)</td><td>0.53 (0.05)</td></tr><tr><td>ΔE</td><td>0.55 (0.19)</td><td>0.78 (0.05)</td><td>0.85 (0.04)</td><td>0.71 (0.06)</td><td>0.68 (0.08)</td><td>0.37 (0.09)</td></tr></table>
275
+
276
+ # 6.3 EVALUATION OF WR-CP IN WASSERSTEIN DISTANCE MINIMIZATION
277
+
278
+ We demonstrated experimentally that WR-CP, utilizing importance weighting, can effectively minimize the Wasserstein distances resulting from both concept shift and covariate shift.
279
+
280
+ Baselines. Besides WR-CP, we also conducted vanilla CP and IW-CP on all sampled datasets.
281
+
282
+ Metric. These approaches were compared based on the Wasserstein distance between calibration and test conformal scores. To place greater emphasis on the vertical coverage gap between conformal score CDFs, the distances were normalized to mitigate the impact of varying score scales across datasets, enabling more meaningful comparisons.
283
+
284
+ Result. Figure 3 shows that WR-CP consistently reduces Wasserstein distance. The extent of these reductions is dependent on the value of $\beta$ . However, despite the ability to address covariate-shift-induced Wasserstein distance, importance weighting may not always lead to a reduction, as seen in the case of the Seattle-loop dataset. Further explanation of the phenomenon is provided in Appendix C.
285
+
286
+ ![](images/33f384cc5747e04a9f14dbf66789429fd016e777431701d7da13d6445036a10a.jpg)
287
+ Figure 3: Comparison of vanilla CP, IW-CP, and WR-CP based on normalized Wasserstein distance between calibration and test conformal scores: IW-CP can only address the distance caused by covariate shift, while WR-CP reduces the distance from concept shift. The $\beta$ values for the WR-CP method are 9, 11, 9, 10, 13, and 13, respectively.
288
+
289
+ # 6.4 ROBUST AND EFFICIENT PREDICTION SETS BY WR-CP
290
+
291
+ We experimentally demonstrated that, compared with prior works, WR-CP is capable of reducing coverage gap without significantly sacrificing prediction efficiency.
292
+
293
+ Baselines. Besides vanilla CP and IW-CP (Tibshirani et al., 2019), conformalized quantile regression (CQR) (Romano et al., 2019) was chosen as a representative method for adaptive CP. We also included worst-case conformal prediction (WC-CP), an implementation of the worst-case approach proposed by Gendler et al. (2021), Cauchois et al. (2024), and Zou & Liu (2024) in the convex hull setup.
294
+
295
+ Metric. We compared the coverage gaps and sizes of prediction sets generated by WR-CP and baselines as $\alpha$ ranges from 0.1 to 0.9 across all sampled datasets. Prediction sets are better when actual coverages are more concentrated around $1 - \alpha$ and have smaller sizes.
296
+
297
+ Result. With $\alpha = 0.2$ , Figure 4 confirms that WR-CP consistently exhibits the most concentrated coverages around $1 - \alpha$ compared to vanilla CP, IW-CP, and CQR across datasets. While WC-CP maintains coverage guarantees under joint distribution shift, it leads to inefficient predictions. In contrast, WR-CP mitigates this inefficiency through smaller set sizes. We show the results with other $\alpha$ values in Appendix F.3. It is important to observe that vanilla CP and IW-CP always have smaller prediction sets than WR-CP. Since WR-CP is trained with the additional Wasserstein regularization term in Eq. (19), the trade-off inevitably causes an increase in prediction errors, which are proportional to conformal scores. Consequently, methods based on empirical risk minimization,
298
+
299
+ ![](images/e576c8ba1afe0ee157419ddfb5e51f3197988bf1ef324b11bbcb9dbe8d9e2e73.jpg)
300
+ Figure 4: Coverages and prediction set sizes of WR-CP and baselines with $\alpha = 0.2$ : WR-CP makes coverages on test data more concentrated around the $1 - \alpha$ level compared to vanilla CP, IW-CP, and CQR. While WC-CP ensures coverage guarantees, it leads to inefficient predictions due to large set sizes, whereas WR-CP mitigates this inefficiency. The $\beta$ values for the WR-CP method are 4.5, 9, 9, 6, 8, and 20, respectively.
301
+
302
+ ![](images/be99376d7aba7e6d93ef273c7f6328b5efed8f185b79da7e05688e1a632ee8ac.jpg)
303
+ Figure 5: Pareto fronts of coverage gap and prediction set size obtained from WR-CP with varying $\beta$ : WR-CP effectively balances conformal prediction accuracy and efficiency, providing a flexible and customizable solution. When $\beta = 0$ , WR-CP returns to IW-CP.
304
+
305
+ like vanilla CP and IW-CP, tend to yield smaller prediction sets compared to WR-CP due to their lower conformal scores. We further discuss the trade-off in WR-CP in Subsection 6.5. Lastly, we can see that IW-CP has worse coverage than vanilla CP on the Seattle-loop dataset, reflecting the fact that importance weighting enlarges the Wasserstein distance on that dataset in Figure 3.
306
+
307
+ # 6.5 ABLATION STUDY
308
+
309
+ As outlined in Eq. (19), WR-CP is regulated by a hyperparameter $\beta$ , which governs the trade-off between coverage gap and prediction set size. It is essential to investigate the performance of WR-CP under different $\beta$ values, which are listed in Appendix F.4. To achieve this, we conducted WR-CP on all sampled datasets with varying $\beta$ values. At each $\beta$ value, we calculated the average coverage gap and set size over $\alpha$ from 0.1 to 0.9. Finally, we obtained a Pareto front for each dataset in Figure 5. In particular, when $\beta = 0$ , WR-CP reverts to IW-CP, so we emphasize the outcomes in this scenario as boundary solutions derived from IW-CP. The results indicate that WR-CP allows users to customize the approach based on their preferences for conformal prediction accuracy and efficiency. We further explore whether WR-CP can achieve efficient prediction with a coverage guarantee in Appendix G. The limitations of our study are presented in Appendix H.
310
+
311
+ # 7 CONCLUSION
312
+
313
+ In this work, we point out that the coverage gap of conformal prediction under joint distribution shift relies on the distance between the CDFs of calibration and test conformal score distributions. Based on this observation, we propose an upper bound of coverage gap utilizing Wasserstein distance, offering better identifiability of gap changes at different $\alpha$ . We conduct a detailed analysis of the bound by utilizing probability measure pushforwards from the shifted joint data distribution to conformal score distributions. This approach allows us to explore the separation of the impact of covariate and concept shifts on the coverage gap. Based on the separation, we design Wasserstein-regularized conformal prediction (WR-CP) via importance weighting and regularized representation learning, which can obtain accurate and efficient prediction sets with controllable balance. The performance of WR-CP is experimentally analyzed with diverse baselines and datasets.
314
+
315
+ # ACKNOWLEDGMENT
316
+
317
+ Sihong Xie was supported by the Department of Science and Technology of Guangdong Province (Grant No. 2023CX10X079), the National Key R&D Program of China (Grant No. 2023YFF0725001), the Guangzhou-HKUST(GZ) Joint Funding Program (Grant No. 2023A03J0008), and the Education Bureau of Guangzhou Municipality.
318
+
319
+ # REFERENCES
320
+
321
+ Salim I Amoukou and Nicolas JB Brunel. Adaptive conformal prediction by reweighting nonconformity score. arXiv preprint arXiv:2303.12695, 2023.
322
+ Anastasios N Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.
323
+ Anastasios N Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. Conformal risk control. arXiv preprint arXiv:2208.02814, 2022.
324
+ Liviu Aolaritei, Nicolas Lanzetti, Hongruyu Chen, and Florian Dörfler. Distributional uncertainty propagation via optimal transport. arXiv preprint arXiv:2205.00343, 2022.
325
+ Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, and Ryan J Tibshirani. Conformal prediction beyond exchangeability. The Annals of Statistics, 51(2):816-845, 2023.
326
+ Alberto Bernacchia and Simone Pigolotti. Self-consistent method for density estimation. Journal of the Royal Statistical Society Series B: Statistical Methodology, 73(3):407-422, 2011.
327
+ Thomas Brooks, D. Pope, and Michael Marcolini. Airfoil Self-Noise. UCI Machine Learning Repository, 2014. DOI: https://doi.org/10.24432/C5VW2C.
328
+ Maxime Cauchois, Suyash Gupta, and John C Duchi. Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction. Journal of machine learning research, 22 (81):1-42, 2021.
329
+ Maxime Cauchois, Suyash Gupta, Alnur Ali, and John C Duchi. Robust validation: Confident predictions even when distributions shift. Journal of the American Statistical Association, pp. 1-66, 2024.
330
+ Nicolo Colombo. Normalizing flows for conformal regression. arXiv preprint arXiv:2406.03346, 2024.
331
+ Zhiyong Cui, Kristian Henrickson, Ruimin Ke, and Yinhai Wang. Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting. IEEE Transactions on Intelligent Transportation Systems, 2019.
332
+ Songgaojun Deng, Shusen Wang, Huzefa Rangwala, Lijing Wang, and Yue Ning. Cola-gnn: Cross-location attention based graph neural networks for long-term ili prediction. In Proceedings of the 29th ACM international conference on information & knowledge management, pp. 245-254, 2020.
333
+ Richard Mansfield Dudley. The speed of mean glivenko-cantelli convergence. The Annals of Mathematical Statistics, 40(1):40-50, 1969.
334
+ Bat-Sheva Einbinder, Stephen Bates, Anastasios N Angelopoulos, Asaf Gendler, and Yaniv Romano. Conformal prediction is robust to label noise. arXiv preprint arXiv:2209.14295, 2022a.
335
+ Bat-Sheva Einbinder, Yaniv Romano, Matteo Sesia, and Yanfei Zhou. Training uncertainty-aware classifiers with conformalized deep learning. Advances in Neural Information Processing Systems, 35:22380-22395, 2022b.
336
+ Shai Feldman, Stephen Bates, and Yaniv Romano. Improving conditional coverage via orthogonal quantile regression. Advances in neural information processing systems, 34:2060-2071, 2021.
337
+
338
+ Di Feng, Ali Harakeh, Steven L Waslander, and Klaus Dietmayer. A review and comparative study on probabilistic object detection in autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 23(8):9961-9980, 2021.
339
+ Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, and Ryan J Tibshirani. The limits of distribution-free conditional predictive inference. Information and Inference: A Journal of the IMA, 10(2):455-482, 2021.
340
+ Robert E Gaunt and Siqi Li. Bounding Kolmogorov distances through Wasserstein and related integral probability metrics. Journal of Mathematical Analysis and Applications, 522(1):126985, 2023.
341
+ Asaf Gendler, Tsui-Wei Weng, Luca Daniel, and Yaniv Romano. Adversarially robust conformal prediction. In International Conference on Learning Representations, 2021.
342
+ Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. Advances in Neural Information Processing Systems, 34:1660-1672, 2021.
343
+ Isaac Gibbs and Emmanuel J Candès. Conformal inference for online prediction with arbitrary distribution shifts. Journal of Machine Learning Research, 25(162):1-36, 2024.
344
+ Isaac Gibbs, John J Cherian, and Emmanuel J Candès. Conformal prediction with conditional guarantees. arXiv preprint arXiv:2305.12616, 2023.
345
+ Leying Guan. Localized conformal prediction: A generalized inference framework for conformal prediction. Biometrika, 110(1):33-50, 2023.
346
+ Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 922-929, 2019.
347
+ Xing Han, Ziyang Tang, Joydeep Ghosh, and Qiang Liu. Split localized conformal prediction, 2023. URL https://arxiv.org/abs/2206.13092.
348
+ Pierre Humbert, Batiste Le Bars, Aurélien Bellet, and Sylvain Arlot. One-shot federated conformal prediction. In International Conference on Machine Learning, pp. 14153-14177. PMLR, 2023.
349
+ Christopher Jung, Georgy Noarov, Ramya Ramalingam, and Aaron Roth. Batch multivalid conformal prediction. arXiv preprint arXiv:2209.15145, 2022.
350
+ Wouter M Kouw and Marco Loog. An introduction to domain adaptation and transfer learning. arXiv preprint arXiv:1812.11806, 2018.
351
+ David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (REx). In International conference on machine learning, pp. 5815-5826. PMLR, 2021.
352
+ Yi Liu, Alexander W Levis, Sharon-Lise Normand, and Larry Han. Multi-source conformal inference under distribution shift. arXiv preprint arXiv:2405.09331, 2024.
353
+ Charles Lu, Yaodong Yu, Sai Praneeth Karimireddy, Michael Jordan, and Ramesh Raskar. Federated conformal predictors for distributed uncertainty quantification. In International Conference on Machine Learning, pp. 22942-22964. PMLR, 2023.
354
+ Sara Magliacane, Thijs Van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, and Joris M Mooij. Domain adaptation by using causal inference to predict invariant conditional distributions. Advances in neural information processing systems, 31, 2018.
355
+ A Tuan Nguyen, Toan Tran, Yarin Gal, Philip HS Torr, and Atilim Güneş Baydin. KL guided domain adaptation. arXiv preprint arXiv:2106.07780, 2021.
356
+ Travis A O'Brien, Karthik Kashinath, Nicholas R Cavanaugh, William D Collins, and John P O'Brien. A fast and objective multidimensional kernel density estimation method: fastkde. Computational Statistics & Data Analysis, 101:148-160, 2016.
357
+
358
+ Victor M Panaretos and Yoav Zemel. Statistical aspects of Wasserstein distances. Annual Review of Statistics and Its Application, 6(1):405-431, 2019.
359
+ Harris Papadopoulos, Kostas Proedrou, Volodya Vovk, and Alex Gammerman. Inductive confidence machines for regression. In Machine Learning: ECML 2002, 13th European Conference on Machine Learning, Helsinki, Finland, August 19-23, 2002, Proceedings, pp. 345-356. Springer, 2002.
360
+ Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
361
+ Vincent Plassier, Mehdi Makni, Aleksandr Rubashevskii, Eric Moulines, and Maxim Panov. Conformal prediction for federated uncertainty quantification under label shift. In International Conference on Machine Learning, pp. 27907-27947. PMLR, 2023.
362
+ Yaniv Romano, Evan Patterson, and Emmanuel Candes. Conformalized quantile regression. Advances in neural information processing systems, 32, 2019.
363
+ Yaniv Romano, Matteo Sesia, and Emmanuel Candes. Classification with valid and adaptive coverage. Advances in Neural Information Processing Systems, 33:3581-3591, 2020.
364
+ Nathan Ross. Fundamentals of Stein's method. Probability Surveys, 8:210-293, 2011.
365
+ Hyun-Sun Ryu and Kwang Sun Ko. Sustainable development of fintech: Focused on uncertainty and perceived quality issues. Sustainability, 12(18):7669, 2020.
366
+ Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.
367
+ Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 2015.
368
+ Silvia Seoni, Vicnesh Jahmunah, Massimo Salvi, Prabal Datta Barua, Filippo Molinari, and U Rajendra Acharya. Application of uncertainty quantification to artificial intelligence in healthcare: A review of last decade (2013-2023). Computers in Biology and Medicine, pp. 107441, 2023.
369
+ Matteo Sesia, YX Wang, and Xin Tong. Adaptive conformal classification with noisy labels. arXiv preprint arXiv:2309.05092, 2023.
370
+ Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction, 2007. URL https://arxiv.org/abs/0706.3188.
371
+ David Stutz, Ali Taylan Cemgil, Arnaud Doucet, et al. Learning optimal conformal classifiers. arXiv preprint arXiv:2110.09192, 2021.
372
+ Ryan J Tibshirani, Rina Foygel Barber, Emmanuel Candes, and Aaditya Ramdas. Conformal prediction under covariate shift. Advances in neural information processing systems, 32, 2019.
373
+ Vladimir Vapnik. Principles of risk minimization for learning theory. Advances in neural information processing systems, 4, 1991.
374
+ Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic learning in a random world, volume 29. Springer, 2005.
375
+ Jonathan Weed and Francis Bach. Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Bernoulli, 25(4A):2620-2648, 2019.
376
+ Chen Xu and Yao Xie. Conformal prediction interval for dynamic time-series. In International Conference on Machine Learning, pp. 11559-11569. PMLR, 2021.
377
+ Ge Yan, Yaniv Romano, and Tsui-Wei Weng. Provably robust conformal prediction with improved efficiency. arXiv preprint arXiv:2404.19651, 2024.
378
+ Xin Zou and Weiwei Liu. Coverage-guaranteed prediction sets for out-of-distribution data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 17263-17270, 2024.
379
+
380
+ # A PROOFS OF THEOREMS
381
+
382
+ # A.1 PROOF OF THEOREM 1
383
+
384
+ Proof. We define $f \times g$ by $(f \times g)(x_1, x_2) = (f(x_1), g(x_2)) = (y_1, y_2)$ . Let $\operatorname{Id}_{\mathcal{X}}$ be the identity map on $\mathcal{X}$ , and let $\pi_i$ be the projection onto the $i$ -th marginal. The proof follows Proposition 3 of Aolaritei et al. (2022).
385
+
386
+ First, we prove the inclusion $(f\times g)_{\#}\Gamma (\mu ,\nu)\subset \Gamma (f_{\#}\mu ,g_{\#}\nu)$ . Consider $\gamma \in \Gamma (\mu ,\nu)$ ; it suffices to prove that $(f\times g)_{\#}\gamma \in \Gamma (f_{\#}\mu ,g_{\#}\nu)$ , i.e., that the marginals of $(f\times g)_{\#}\gamma$ are $f_{\#}\mu$ and $g_{\#}\nu$ . For any continuous and bounded function $\phi :\mathcal{Y}\to \mathbb{R}$ , we have
387
+
388
+ $$
389
+ \begin{array}{l} \int_{\mathcal{Y} \times \mathcal{Y}} \phi(y_1) \,\mathrm{d}((f \times g)_{\#}\gamma)(y_1, y_2) = \int_{\mathcal{X} \times \mathcal{X}} \phi(f(x_1)) \,\mathrm{d}\gamma(x_1, x_2) \tag{20} \\ = \int_{\mathcal{X}} \phi(f(x_1)) \,\mathrm{d}\mu(x_1) = \int_{\mathcal{Y}} \phi(y_1) \,\mathrm{d}(f_{\#}\mu)(y_1), \end{array}
390
+ $$
391
+
392
+ so we obtain $\pi_{1\#}((f\times g)_{\#}\gamma) = f_{\#}\mu$ , and similarly $\pi_{2\#}((f\times g)_{\#}\gamma) = g_{\#}\nu$ .
393
+
394
+ Secondly, we need to prove $\Gamma(f_{\#}\mu,g_{\#}\nu) \subset (f \times g)_{\#}\Gamma(\mu,\nu)$ . Given $\gamma' \in \Gamma(f_{\#}\mu,g_{\#}\nu)$ , we seek $\gamma \in \Gamma(\mu,\nu)$ such that $(f \times g)_{\#}\gamma = \gamma'$ . To do so, let $\gamma_{12} := (\mathrm{Id}_{\mathcal{X}} \times f)_{\#}\mu \in \Gamma(\mu,f_{\#}\mu)$ , $\gamma_{23} := \gamma' \in \Gamma(f_{\#}\mu,g_{\#}\nu)$ , and $\gamma_{34} := (g \times \mathrm{Id}_{\mathcal{X}})_{\#}\nu \in \Gamma(g_{\#}\nu,\nu)$ . As $\pi_{2\#}\gamma_{12} = \pi_{1\#}\gamma_{23} = f_{\#}\mu$ and $\pi_{1\#}\gamma_{34} = \pi_{2\#}\gamma_{23} = g_{\#}\nu$ , the gluing lemma (Santambrogio, 2015) ensures the existence of a joint probability measure $\bar{\gamma}$ on $\mathcal{X} \times \mathcal{Y} \times \mathcal{Y} \times \mathcal{X}$ satisfying $(\pi_1 \times \pi_2)_{\#}\bar{\gamma} = \gamma_{12}$ , $(\pi_2 \times \pi_3)_{\#}\bar{\gamma} = \gamma_{23}$ , and $(\pi_3 \times \pi_4)_{\#}\bar{\gamma} = \gamma_{34}$ . We demonstrate that $\gamma := (\pi_1 \times \pi_4)_{\#}\bar{\gamma}$ is the probability measure we seek. First, we show $\gamma \in \Gamma(\mu,\nu)$ : for any continuous and bounded function $\phi: \mathcal{X} \to \mathbb{R}$ ,
395
+
396
+ $$
397
+ \begin{array}{l} \int_{\mathcal{X} \times \mathcal{X}} \phi(x_1) \,\mathrm{d}\gamma(x_1, x_2) = \int_{\mathcal{X} \times \mathcal{Y} \times \mathcal{Y} \times \mathcal{X}} \phi(x_1) \,\mathrm{d}\bar{\gamma}(x_1, y_1, y_2, x_2) \tag{21} \\ = \int_{\mathcal{X} \times \mathcal{Y}} \phi(x_1) \,\mathrm{d}\gamma_{12}(x_1, y_1) = \int_{\mathcal{X}} \phi(x_1) \,\mathrm{d}\mu(x_1). \end{array}
398
+ $$
399
+
400
+ Eq. (21) indicates $\pi_{1\#}\gamma = \mu$ . Similarly, we can derive $\pi_{2\#}\gamma = \nu$ . As a result, we can prove $(f\times g)_{\#}\gamma = \gamma^{\prime}$ with any continuous and bounded function $\phi :\mathcal{Y}\times \mathcal{Y}\to \mathbb{R}$ by
401
+
402
+ $$
403
+ \begin{array}{l} \int_{\mathcal{Y} \times \mathcal{Y}} \phi(y_1, y_2) \,\mathrm{d}((f \times g)_{\#}\gamma)(y_1, y_2) \\ = \int_{\mathcal{X} \times \mathcal{X}} \phi(f(x_1), g(x_2)) \,\mathrm{d}\gamma(x_1, x_2) \\ = \int_{\mathcal{X} \times \mathcal{Y} \times \mathcal{Y} \times \mathcal{X}} \phi(f(x_1), g(x_2)) \,\mathrm{d}\bar{\gamma}(x_1, y_1, y_2, x_2) \tag{22} \\ = \int_{\mathcal{X} \times \mathcal{Y} \times \mathcal{Y} \times \mathcal{X}} \phi(y_1, y_2) \,\mathrm{d}\bar{\gamma}(x_1, y_1, y_2, x_2) \\ = \int_{\mathcal{Y} \times \mathcal{Y}} \phi(y_1, y_2) \,\mathrm{d}\gamma_{23}(y_1, y_2) = \int_{\mathcal{Y} \times \mathcal{Y}} \phi(y_1, y_2) \,\mathrm{d}\gamma^{\prime}(y_1, y_2). \end{array}
404
+ $$
405
+
406
+ As $(f\times g)_{\#}\Gamma (\mu ,\nu)\subset \Gamma (f_{\#}\mu ,g_{\#}\nu)$ and $\Gamma (f_{\#}\mu ,g_{\#}\nu)\subset (f\times g)_{\#}\Gamma (\mu ,\nu)$ , we obtain $(f\times g)_{\#}\Gamma (\mu ,\nu) = \Gamma (f_{\#}\mu ,g_{\#}\nu)$ . Finally, we prove Theorem 1 by
407
+
408
+ $$
409
+ \begin{array}{l} W(\mu_f, \nu_g) = W(f_{\#}\mu, g_{\#}\nu) \\ = \inf_{\gamma' \in \Gamma(f_{\#}\mu, g_{\#}\nu)} \int_{\mathcal{Y} \times \mathcal{Y}} c_{\mathcal{Y}}(y_1, y_2) \,\mathrm{d}\gamma'(y_1, y_2) \\ = \inf_{\gamma' \in (f \times g)_{\#}\Gamma(\mu, \nu)} \int_{\mathcal{Y} \times \mathcal{Y}} c_{\mathcal{Y}}(y_1, y_2) \,\mathrm{d}\gamma'(y_1, y_2) \tag{23} \\ = \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{\mathcal{Y} \times \mathcal{Y}} c_{\mathcal{Y}}(y_1, y_2) \,\mathrm{d}((f \times g)_{\#}\gamma)(y_1, y_2) \\ = \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{\mathcal{X} \times \mathcal{X}} c_{\mathcal{Y}}(f(x_1), g(x_2)) \,\mathrm{d}\gamma(x_1, x_2). \end{array}
410
+ $$
411
+
412
+ ![](images/29c7df11d6704d20ff247f531775c0b0385495dea6ffa63e4e689fa8bf62435b.jpg)
413
+
414
+ # A.2 PROOF OF THEOREM 2
415
+
416
+ Proof. Let $\gamma' \in \Gamma(\mu_f, \nu_f)$ be the pushforward of $\gamma \in \Gamma(\mu, \nu)$ via function $f \times f$ . We can apply Theorem 1 to $W(\mu_f, \nu_f)$ and obtain
417
+
418
+ $$
419
+ W \left(\mu_ {f}, \nu_ {f}\right) = \inf _ {\gamma \in \Gamma (\mu , \nu)} \int_ {\mathcal {X} \times \mathcal {X}} c _ {\mathcal {Y}} \left(f \left(x _ {1}\right), f \left(x _ {2}\right)\right) \mathrm {d} \gamma \left(x _ {1}, x _ {2}\right). \tag {24}
420
+ $$
421
+
422
+ If $\gamma^*$ is the optimal transport plan for $W(\mu, \nu)$ and $\kappa$ is a Lipschitz constant of $f$ , we have
423
+
424
+ $$
425
+ \begin{array}{l} W \left(\mu_ {f}, \nu_ {f}\right) \leq \int_ {\mathcal {X} \times \mathcal {X}} c _ {\mathcal {Y}} \left(f \left(x _ {1}\right), f \left(x _ {2}\right)\right) \mathrm {d} \gamma^ {*} \left(x _ {1}, x _ {2}\right) \tag {25} \\ \leq \int_ {\mathcal {X} \times \mathcal {X}} \kappa c _ {\mathcal {X}} (x _ {1}, x _ {2}) \mathrm {d} \gamma^ {*} (x _ {1}, x _ {2}) = \kappa W (\mu , \nu). \\ \end{array}
426
+ $$
427
+
428
+ In Eq. (25), the first inequality holds because $\gamma^{*}$ may not be the optimal transport plan for $W(\mu_f,\nu_f)$ , and the second inequality holds by the definition of $\kappa$ .
429
+
430
+ # A.3 PROOF OF THEOREM 3
431
+
432
+ Proof. As the Wasserstein distance satisfies the triangle inequality, $W(\mu, \nu)$ and $W(\hat{\mu}_n, \hat{\nu}_m)$ satisfy
433
+
434
+ $$
435
+ W (\mu , \nu) \leq W \left(\hat {\mu} _ {n}, \mu\right) + W \left(\hat {\mu} _ {n}, \nu\right) \leq W \left(\hat {\mu} _ {n}, \mu\right) + W \left(\hat {\mu} _ {n}, \hat {\nu} _ {m}\right) + W \left(\hat {\nu} _ {m}, \nu\right). \tag {26}
436
+ $$
437
+
438
+ Given $\mathbb{E}[W(\mu, \hat{\mu}_n)] \leq \lambda_\mu n^{-1/\sigma_\mu}$ and $\mathbb{E}[W(\nu, \hat{\nu}_m)] \leq \lambda_\nu m^{-1/\sigma_\nu}$ from Proposition 2, with probabilities at least $1 - e^{-2nt_\mu^2}$ and $1 - e^{-2mt_\nu^2}$ , respectively, we have
439
+
440
+ $$
441
+ W \left(\mu , \hat {\mu} _ {n}\right) \leq \lambda_ {\mu} n ^ {- 1 / \sigma_ {\mu}} + t _ {\mu}, W \left(\nu , \hat {\nu} _ {m}\right) \leq \lambda_ {\nu} m ^ {- 1 / \sigma_ {\nu}} + t _ {\nu}. \tag {27}
442
+ $$
443
+
444
+ It is reasonable to assume the two events in Eq. (27) are independent, so we can apply them to Eq. (26), and thus obtain Eq. (15) with probability at least $(1 - e^{-2nt_{\mu}^2})(1 - e^{-2mt_{\nu}^2})$ .
445
+
446
+ # A.4 PROOF OF THEOREM 4
447
+
448
+ Proof. We denote by $F_{\mu}$ , $F_{\nu}$ , and $F_{\nu^{(i)}}$ the CDFs of $\mu$ , $\nu$ , and $\nu^{(i)}$ for $i = 1, \dots, k$ , respectively.
449
+
450
+ When two distributions are supported on the real line $\mathbb{R}$ with the Euclidean distance as the transport cost, the 1-Wasserstein distance between them equals the area between their CDFs. Therefore, the 1-Wasserstein distance between $\mu$ and $\nu$ is given by
451
+
452
+ $$
453
+ W (\mu , \nu) = \int_ {\mathcal {X}} | F _ {\mu} (x) - F _ {\nu} (x) | \mathrm {d} x. \tag {28}
454
+ $$
455
+
456
+ Since $\nu = \sum_{i=1}^{k} w_i \nu^{(i)}$ , we have $F_\nu(x) = \sum_{i=1}^{k} w_i F_{\nu^{(i)}}(x)$ . As $\nu, \nu^{(i)}$ , and $\mu$ are defined on $\mathcal{X} \subseteq \mathbb{R}$ , we can derive
457
+
458
+ $$
459
+ \begin{array}{l} W (\mu , \nu) = \int_ {\mathcal {X}} | F _ {\mu} (x) - F _ {\nu} (x) | \mathrm {d} x = \int_ {\mathcal {X}} \left| F _ {\mu} (x) - \sum_ {i = 1} ^ {k} w _ {i} F _ {\nu^ {(i)}} (x) \right| \mathrm {d} x \\ = \int_ {\mathcal {X}} \left| \sum_ {i = 1} ^ {k} w _ {i} F _ {\mu} (x) - \sum_ {i = 1} ^ {k} w _ {i} F _ {\nu^ {(i)}} (x) \right| \mathrm {d} x = \int_ {\mathcal {X}} \left| \sum_ {i = 1} ^ {k} w _ {i} \left(F _ {\mu} (x) - F _ {\nu^ {(i)}} (x)\right) \right| \mathrm {d} x \tag {29} \\ \leq \int_ {\mathcal {X}} \sum_ {i = 1} ^ {k} w _ {i} \left| F _ {\mu} (x) - F _ {\nu^ {(i)}} (x) \right| \mathrm {d} x = \sum_ {i = 1} ^ {k} w _ {i} \int_ {\mathcal {X}} \left| F _ {\mu} (x) - F _ {\nu^ {(i)}} (x) \right| \mathrm {d} x \\ = \sum_ {i = 1} ^ {k} w _ {i} W (\mu , \nu^ {(i)}). \\ \end{array}
460
+ $$
461
+
462
+ ![](images/f4bb8a35aa7159ba17ea8d59044d804c318b007cac9ef39fd2233f8392784adf.jpg)
463
+
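+ To make the mixture bound concrete, the following minimal sketch (our own illustration, not code from the paper; the Gaussian components and weights are arbitrary choices) checks Eq. (29) numerically with `scipy.stats.wasserstein_distance`:
+ 
+ ```python
+ # Sanity check of Theorem 4: for a 1-D mixture nu = sum_i w_i nu^(i),
+ # we expect W(mu, nu) <= sum_i w_i W(mu, nu^(i)).
+ import numpy as np
+ from scipy.stats import wasserstein_distance
+ 
+ rng = np.random.default_rng(0)
+ mu = rng.normal(0.0, 1.0, 5000)                      # samples from mu
+ components = [rng.normal(m, s, 5000)                 # samples from nu^(i)
+               for m, s in [(0.5, 1.0), (2.0, 0.5), (-1.0, 2.0)]]
+ w = np.array([0.5, 0.3, 0.2])                        # mixture weights
+ 
+ # Draw mixture samples by picking a random component for each point.
+ idx = rng.choice(len(w), size=5000, p=w)
+ nu = np.array([components[i][j] for j, i in enumerate(idx)])
+ 
+ lhs = wasserstein_distance(mu, nu)
+ rhs = sum(wi * wasserstein_distance(mu, c) for wi, c in zip(w, components))
+ print(f"W(mu, nu) = {lhs:.4f} <= {rhs:.4f}")         # the inequality holds
+ ```
+ 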
464
+ # B COMPARISON BETWEEN TOTAL VARIATION AND WASSERSTEIN DISTANCE
465
+
466
+ The total variation (TV) distance between two univariate distributions is defined as half of the absolute area between their probability density functions (PDFs). For instance, given two distributions $\mu$ and $\nu$ with PDFs $p_{\mu}$ and $p_{\nu}$ , respectively, on space $\mathbb{R}_{\geq 0}$ , the TV distance is given by
467
+
468
+ $$
469
+ T V (\mu , \nu) = \frac {1}{2} \int_ {\mathbb {R} _ {\geq 0}} | p _ {\mu} (x) - p _ {\nu} (x) | \mathrm {d} x. \tag {30}
470
+ $$
471
+
472
+ In contrast, expanding $W(\mu ,\nu)$ according to Eq. (28) gives
473
+
474
+ $$
475
+ \begin{array}{l} W(\mu, \nu) = \int_{\mathbb{R}_{\geq 0}} |F_{\mu}(x) - F_{\nu}(x)| \,\mathrm{d}x = \int_{\mathbb{R}_{\geq 0}} \left| \int_0^x p_{\mu}(t) \,\mathrm{d}t - \int_0^x p_{\nu}(t) \,\mathrm{d}t \right| \mathrm{d}x \tag{31} \\ = \int_{\mathbb{R}_{\geq 0}} \left| \int_0^x \left(p_{\mu}(t) - p_{\nu}(t)\right) \mathrm{d}t \right| \mathrm{d}x. \end{array}
476
+ $$
477
+
478
+ The inner integration between 0 and $x$ indicates that the Wasserstein distance is sensitive to where the two distributions $\mu$ and $\nu$ differ, whereas the total variation distance in Eq. (30) ignores this location information.
479
+
480
+ We introduce a toy example to further illustrate why the total variation distance cannot consistently capture the closeness between two cumulative distribution functions (CDFs). Consider three conformal score distributions $P_V, Q_V^{(1)}, Q_V^{(2)}$ on space $\mathbb{R}_{\geq 0}$ with their PDFs:
481
+
482
+ $$
483
+ p_{P_V}(v) = 1, \quad v \in [0, 1];
484
+ $$
485
+
486
+ $$
487
+ p_{Q_V^{(1)}}(v) = \left\{ \begin{array}{ll} 1 & \text{if } v \in [0, 0.9], \\ 2 & \text{if } v \in (0.9, 0.95]; \end{array} \right.
488
+ $$
489
+
490
+ $$
491
+ p_{Q_V^{(2)}}(v) = \left\{ \begin{array}{ll} 2 & \text{if } v \in [0, 0.04], \\ 1 & \text{if } v \in (0.04, 0.96]. \end{array} \right.
492
+ $$
493
+
494
+ Therefore, we calculate $TV(P_V, Q_V^{(1)}) = 0.05$ and $TV(P_V, Q_V^{(2)}) = 0.04$ , while $W(P_V, Q_V^{(1)}) = 0.0025$ and $W(P_V, Q_V^{(2)}) = 0.0384$ . In this example, the distribution with the smaller total variation distance to $P_V$ has the larger Wasserstein distance to $P_V$ . Intuitively, the TV distance only measures the overall difference between two distributions without accounting for the specific locations where they diverge. In contrast, the Wasserstein distance is large when divergence occurs early (i.e., at a small quantile), especially if the discrepancy persists until the "lagging" CDF catches up. We visualize the example in Figure 6.
495
+
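+ The toy numbers above can be reproduced with a few lines of NumPy (a sketch we add for illustration, discretizing Eq. (28) and Eq. (30) on a grid; it is not code from the paper):
+ 
+ ```python
+ # Numerical check of the toy example: TV from the PDFs, W1 from the CDFs.
+ import numpy as np
+ 
+ x = np.linspace(0.0, 1.0, 200001)
+ dx = x[1] - x[0]
+ 
+ p_P  = np.ones_like(x)                                          # P_V: uniform on [0, 1]
+ p_Q1 = np.where(x <= 0.9, 1.0, np.where(x <= 0.95, 2.0, 0.0))   # Q_V^(1)
+ p_Q2 = np.where(x <= 0.04, 2.0, np.where(x <= 0.96, 1.0, 0.0))  # Q_V^(2)
+ 
+ def tv(p, q):                     # Eq. (30): half the L1 gap between PDFs
+     return 0.5 * np.sum(np.abs(p - q)) * dx
+ 
+ def w1(p, q):                     # Eq. (28): area between the CDFs
+     return np.sum(np.abs(np.cumsum(p - q) * dx)) * dx
+ 
+ print(tv(p_P, p_Q1), tv(p_P, p_Q2))   # ~0.05 and ~0.04
+ print(w1(p_P, p_Q1), w1(p_P, p_Q2))   # ~0.0025 and ~0.0384
+ ```
+ 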
496
+ ![](images/0d7c1cf42c77272903c3ff2ae763ca98d40470974c081a8c35bb2d3fc3e1f00f.jpg)
497
+
498
+ ![](images/7739f1e9995fbcd00c2739bed2996a7ef129eae533a06b997f84b399eac811d4.jpg)
499
+
500
+ ![](images/1ff36822269d0422d8b634f04a3741d825f131fbc9d9dd7ed52e2a3d7c56cab2.jpg)
501
+ Figure 6: Comparison between total variation distance and Wasserstein distance: a reduction in the total variation distance does not necessarily result in CDFs becoming closer.
502
+
503
+ ![](images/349c6e97a493e2c584a9ce081f41a35fea3a8b927f30924e6099b19870cd858e.jpg)
504
+
505
+ # C RATIONALE FOR DIFFERENTIATING COVARIATE AND CONCEPT SHIFTS
506
+
507
+ There are two key reasons to differentiate between covariate and concept shifts. First, making this distinction enables the application of importance weighting. Minimizing the Wasserstein regularization term inevitably increases prediction residuals. By applying importance weighting, we expect to reduce the distance, mitigating the adverse effects of regularization on optimizing the regression loss function in Eq. (19). Figure 3 shows this expectation is met on five out of the six datasets. This occurs because, in most cases, covariate shifts exacerbate the distance caused by concept shifts $(f_{P} \neq f_{Q})$ . Consequently, importance weighting effectively reduces this distance, as illustrated in Figure 7(a) and evidenced by the results for the airfoil self-noise, PeMSD4, PeMSD8, U.S.-States, and Japan-Prefectures datasets in Figure 3. However, there are instances where covariate shifts can alleviate the Wasserstein distance induced by concept shifts. In such cases, applying importance weighting may increase the distance, as demonstrated in the results for the Seattle-loop dataset in Figure 3. This phenomenon is further illustrated in Figure 7(b).
508
+
509
+ ![](images/7b0bb80d22dd51c437aa9770180009fe91c985c4b622836685dac452fa7487db.jpg)
510
+ Figure 7: Effect of importance weighting on Wasserstein distance: (a) Scenario where importance weighting reduces Wasserstein distance; (b) Scenario where importance weighting enlarges Wasserstein distance.
511
+
512
+ ![](images/f4076f96160cfe20b0b25f088d55d27f269a699e6a4c0018b8728f083b4a8c30.jpg)
513
+
514
+ Secondly, in multi-source CP, different training distributions $D_{XY}^{(i)}$ can suffer from different degrees of covariate and concept shifts. Importance weighting allows the regularized loss in Eq. (19) to minimize the distance between each training conformal score distribution $D_V^{(i)}$ and its correspondingly weighted calibration conformal score distribution $D_{V,s_P}^{(i)}$ , so the model can focus on the training distributions whose remaining Wasserstein distances are large. Also, since various non-exchangeable test distributions will weight the calibration conformal score distribution differently in the inference phase, prediction set sizes can adapt to different test distributions. In contrast, without importance weighting, the model can only regularize $\sum_{i=1}^{k} W(P_V, D_V^{(i)})$ and use the same quantile of $P_V$ to generate prediction sets for samples from all test distributions, resulting in identical prediction set sizes and a lack of adaptiveness.
515
+
516
+ To further demonstrate the two reasons mentioned above, we modify the Wasserstein regularization to be based on unweighted calibration conformal scores (i.e., $\sum_{i=1}^{k} W(P_V, D_V^{(i)})$ ) during training. The weighting operation in the prediction phase of Algorithm 1 is removed accordingly. This method is denoted as WR-CP(uw). We ran WR-CP(uw) on the sampled data from the 10 trials of each dataset at $\alpha = 0.2$ and compared its results with those of WR-CP.
517
+
518
+ The comparison is depicted in Figure 8. Although the average coverage gaps of WR-CP and WR-CP(uw) are quite similar, at $3.1\%$ and $2.3\%$ respectively, the average prediction set size for WR-CP is $28.0\%$ smaller than that of WR-CP(uw). This observation supports our first reason: importance weighting effectively reduces the Wasserstein distance between calibration and test conformal scores. By doing so, it mitigates the side effect of optimizing the regularized objective in Eq. (19), which increases prediction residuals. Since larger residuals result in larger prediction sets, reducing residuals directly helps minimize prediction set size. Additionally, the standard deviations of the prediction set sizes observed for WR-CP(uw) are typically smaller than those for WR-CP. This supports the second reason: removing importance weighting makes prediction sets less adaptive to different test distributions.
519
+
520
+ ![](images/cd7fd4d4ed1804724106105fd056262c50d7b79a6b8a76fd6f3cacf5d1cc276e.jpg)
521
+ Figure 8: Comparison between WR-CP and WR-CP(uw) at $\alpha = 0.2$ . Both methods were implemented using the same $\beta$ values of 4.5, 9, 9, 6, 8, and 20 across the datasets.
522
+
523
+ # D GEOMETRIC INTUITION OF $\eta$
524
+
525
+ To provide a geometric intuition of $\eta$ , we expand the definition of $\eta$ as
526
+
527
+ $$
528
+ \begin{array}{l} \eta = \max_{x_1, x_2 \in \mathcal{X}} \frac{\left| s_P(x_1) - s_Q(x_2) \right|}{\left| f_P(x_1) - f_Q(x_2) \right|} \\ = \max_{x_1, x_2 \in \mathcal{X}} \frac{\left| s(x_1, f_P(x_1)) - s(x_2, f_Q(x_2)) \right|}{\left| f_P(x_1) - f_Q(x_2) \right|} \tag{32} \\ = \max_{x_1, x_2 \in \mathcal{X}} \frac{\bigl| \left| h(x_1) - f_P(x_1) \right| - \left| h(x_2) - f_Q(x_2) \right| \bigr|}{\left| f_P(x_1) - f_Q(x_2) \right|}. \end{array}
529
+ $$
530
+
531
+ We first simplify the definition by assuming $x_{1} = x_{2}$ : the denominator is then the absolute difference between the two ground-truth mapping functions $f_{P}$ and $f_{Q}$ at $x_{1}$ , and the numerator is the absolute difference between the residuals of a given model $h$ on $f_{P}$ and $f_{Q}$ at $x_{1}$ . $\eta$ is the largest ratio between the two absolute differences. A small $\eta$ means that even if $f_{P}$ and $f_{Q}$ differ significantly, $h$ yields similar prediction residuals on $f_{P}$ and $f_{Q}$ . When $x_{1} \neq x_{2}$ , $\eta$ is the largest ratio of the two absolute differences at the two positions $x_{1}$ and $x_{2}$ , so a small $\eta$ means that $h$ leads to similar residuals even when $f_{P}(x_{1})$ and $f_{Q}(x_{2})$ differ. The expanded definition above covers both the $x_{1} = x_{2}$ and $x_{1} \neq x_{2}$ cases, which Figure 9 (a) and (b) illustrate, respectively. Intuitively, the residual difference caused by concept shift is constrained by $\eta$ .
532
+
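+ The following small sketch (our own toy example; $f_P$ , $f_Q$ , and the two models are arbitrary choices, not from the paper) evaluates the $x_1 = x_2$ restriction of Eq. (32) on a grid:
+ 
+ ```python
+ # Toy illustration of eta restricted to x1 = x2; the restriction lower-bounds
+ # the full max over pairs (x1, x2).
+ import numpy as np
+ 
+ xs = np.linspace(0.0, 2.0 * np.pi, 1000)
+ f_P = np.sin(xs)                      # ground truth under P
+ f_Q = np.sin(xs) + 0.5                # ground truth under Q (concept shift)
+ 
+ def eta_diag(h):
+     # max over x of | |h - f_P| - |h - f_Q| | / |f_P - f_Q|
+     num = np.abs(np.abs(h - f_P) - np.abs(h - f_Q))
+     den = np.abs(f_P - f_Q)           # constantly 0.5 in this toy example
+     return float(np.max(num / den))
+ 
+ h_centered = np.sin(xs) + 0.25        # equidistant from f_P and f_Q
+ h_biased = np.sin(xs)                 # fits f_P exactly, ignores f_Q
+ 
+ print(eta_diag(h_centered))           # 0.0: similar residuals despite the shift
+ print(eta_diag(h_biased))             # 1.0: residual gap as large as the shift
+ ```
+ 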
533
+ ![](images/d7291143fef4be56b3c1c8482f27813a3c09e4335bfa7cc068020f79ad89115a.jpg)
534
+ (a)
535
+
536
+ ![](images/190b01e7afd84db16e9e0bd3766e3c8983f79b1ada8cda9edf447e6ce26fbcd8.jpg)
537
+ (b)
538
+ Figure 9: Geometric intuition of $\eta$ when (a) $x_{1} = x_{2}$ and (b) $x_{1} \neq x_{2}$ : Intuitively, the residual difference caused by concept shift will be constrained by $\eta$ .
539
+
540
+ # E DISTRIBUTION ESTIMATION
541
+
542
+ # E.1 KERNEL DENSITY ESTIMATION
543
+
544
+ $\hat{P}_X$ and $\hat{D}_X^{(i)}$ for $i = 1,\dots,k$ are obtained by kernel density estimation (KDE), and based on the estimated distributions we calculate the likelihood ratio.
545
+
546
+ In our experiments, we applied the Gaussian kernel, which is a positive function of $x \in \mathcal{X} \subseteq \mathbb{R}^d$ given by
547
+
548
+ $$
549
+ \mathrm {K} (x, b) = \frac {1}{(\sqrt {2 \pi} b) ^ {d}} e ^ {- \frac {\| x \| ^ {2}}{2 b ^ {2}}}, \tag {33}
550
+ $$
551
+
552
+ where $\| \cdot \|$ is the Euclidean norm and $b$ is the bandwidth. Given this kernel form, the estimated probability density, denoted by $\hat{p}$ , at a position $x_{a}$ within a group of points $x_{1},\ldots ,x_{n}$ is
553
+
554
+ $$
555
+ \hat{p}(x_a, \mathrm{K}) = \frac{1}{n} \sum_{i = 1}^{n} \mathrm{K}(x_a - x_i, b). \tag{34}
556
+ $$
557
+
558
+ To find the optimal bandwidth for $\hat{P}_X$ and $\hat{D}_X^{(i)}$ , $i = 1,\dots,k$ , on each dataset, we applied grid search over a pool of bandwidth candidates using the scikit-learn package (Pedregosa et al., 2011). With the approximated marginal distribution densities, we can calculate the likelihood ratio to implement the weighting technique proposed by Tibshirani et al. (2019).
559
+
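+ As a concrete illustration of this step, the sketch below (our own, with placeholder data; not the paper's code) tunes Gaussian-kernel bandwidths with scikit-learn's `GridSearchCV` over the logarithmic range reported in Appendix H.1, and forms the likelihood ratio from the two density estimates:
+ 
+ ```python
+ # Gaussian KDE with cross-validated bandwidth, then importance weights
+ # as a density ratio evaluated on the calibration features.
+ import numpy as np
+ from sklearn.model_selection import GridSearchCV
+ from sklearn.neighbors import KernelDensity
+ 
+ def fit_kde(samples):
+     """Gaussian KDE; bandwidth chosen by grid search on log-likelihood."""
+     grid = GridSearchCV(KernelDensity(kernel="gaussian"),
+                         {"bandwidth": np.logspace(-2, 0.5, 20)}, cv=5)
+     grid.fit(samples)
+     return grid.best_estimator_
+ 
+ rng = np.random.default_rng(0)
+ cal_X   = rng.normal(0.0, 1.0, size=(500, 2))   # calibration features ~ P_X
+ train_X = rng.normal(0.5, 1.2, size=(500, 2))   # one training subset ~ D_X^(i)
+ 
+ kde_P, kde_D = fit_kde(cal_X), fit_kde(train_X)
+ 
+ # score_samples returns log densities, so the ratio is an exponentiated gap.
+ weights = np.exp(kde_D.score_samples(cal_X) - kde_P.score_samples(cal_X))
+ weights /= weights.sum()                        # normalize to sum to one
+ ```
+ 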
560
+ # E.2 POINT-WISE DISTRIBUTION ESTIMATION
561
+
562
+ $\hat{D}_V^{(i)}$ and $\hat{D}_{V,s_P}^{(i)}$ for $i = 1,\dots,k$ are estimated as discontinuous, point-wise distributions to ensure differentiability during training. Specifically, as $\hat{D}_V^{(i)}$ and $\hat{D}_{V,s_P}^{(i)}$ are conformal score distributions on the real line $\mathbb{R}$ , $W(\hat{D}_V^{(i)},\hat{D}_{V,s_P}^{(i)})$ equals the area between their CDFs, as Eq. (28) shows. Hence, our focus is on estimating the CDFs of $\hat{D}_V^{(i)}$ and $\hat{D}_{V,s_P}^{(i)}$ for $i = 1,\dots,k$ .
563
+
564
+ For the details of point-wise distribution estimation, consider samples $x_{1},\ldots ,x_{n}$ drawn from a probability measure $\mu$ on $\mathcal{X}\subseteq \mathbb{R}$ , with empirical measure $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n}\delta_{x_i}$ , where $\delta_{x_i}$ is the point mass at $x_{i}$ (i.e., the distribution placing all mass at the value $x_{i}$ ). The approximated CDF of $\mu$ is then given by
565
+
566
+ $$
567
+ F_{\hat{\mu}}(x) = \frac{1}{n} \sum_{i = 1}^{n} \mathbb{1}_{x_i < x}, \tag{35}
568
+ $$
569
+
570
+ where $\mathbb{1}$ is the indicator function. In other words, Eq. (35) is the fraction of samples that are smaller than $x$ . This point-wise estimation ensures that the Wasserstein-1 distance between the estimated distributions is differentiable with respect to the samples.
571
+
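+ For example, under the simplifying assumption of two equal-size, equally weighted score samples, the area in Eq. (28) reduces to the mean absolute difference of sorted samples, which is differentiable in PyTorch (a sketch of the idea, not the paper's implementation; the weighted case would instead use a quantile-function formulation):
+ 
+ ```python
+ # Differentiable 1-Wasserstein distance between two 1-D score samples.
+ import torch
+ 
+ def w1_empirical(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
+     """W1 between two empirical distributions with n equally weighted atoms:
+     the area between the step CDFs of Eq. (35) equals the mean absolute
+     difference of the sorted samples."""
+     u_sorted, _ = torch.sort(u)
+     v_sorted, _ = torch.sort(v)
+     return (u_sorted - v_sorted).abs().mean()
+ 
+ scores_cal = torch.rand(256, requires_grad=True)   # e.g. calibration scores
+ scores_trn = torch.rand(256)                       # scores on one training subset
+ loss = w1_empirical(scores_cal, scores_trn)
+ loss.backward()                                    # gradients flow through sorting
+ ```
+ 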
572
+ # F SUPPLEMENTARY EXPERIMENTAL INSIGHTS
573
+
574
+ # F.1 DATASETS
575
+
576
+ The airfoil self-noise dataset from the UCI Machine Learning Repository (Brooks et al., 2014) was intentionally modified to introduce covariate and concept shifts among its subsets. It includes 1503 instances. The target variable is the scaled sound pressure level of NASA airfoils, and there are 5 features: log frequency, angle of attack, chord length, free-stream velocity, and log displacement thickness of the suction side. To introduce covariate shift, we divided the original dataset into three subsets based on the $33\%$ and $66\%$ quantiles of the first feature, log frequency, and partially shuffled them. Therefore, $k = 3$ for this dataset. We further introduced concept shifts among the three subsets by modifying target values. With $\xi$ following a normal distribution $N(0,10)$ , for $y$ in the first subset, $y \leftarrow y + (y/1000)\,\xi$ ; for $y$ in the second subset, $y \leftarrow y + y/\xi$ ; for $y$ in the third subset, $y \leftarrow y + \xi$ . With the modified data, we conducted sampling trials to generate 10 randomly sampled datasets.
577
+
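+ A sketch of this preprocessing recipe is given below (our reconstruction of the text above with placeholder arrays, reading $N(0, 10)$ as mean 0 and scale 10 and drawing $\xi$ per sample; it is not the authors' script):
+ 
+ ```python
+ # Split by log-frequency quantiles, then inject a different concept shift
+ # into each subset's targets.
+ import numpy as np
+ 
+ rng = np.random.default_rng(0)
+ X = rng.random((1503, 5))              # placeholder features; column 0 = log frequency
+ y = 100.0 + 10.0 * rng.random(1503)    # placeholder sound-pressure targets
+ 
+ q33, q66 = np.quantile(X[:, 0], [0.33, 0.66])
+ masks = [X[:, 0] <= q33, (X[:, 0] > q33) & (X[:, 0] <= q66), X[:, 0] > q66]
+ subsets = [y[m].copy() for m in masks]  # k = 3 covariate-shifted subsets
+ 
+ xi = [rng.normal(0.0, 10.0, s.size) for s in subsets]
+ subsets[0] += subsets[0] / 1000.0 * xi[0]   # y += (y / 1000) * xi
+ subsets[1] += subsets[1] / xi[1]            # y += y / xi (heavy-tailed near xi = 0)
+ subsets[2] += xi[2]                         # y += xi
+ ```
+ 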
578
+ The Seattle-loop dataset (Cui et al., 2019), as well as the PeMSD4 and PeMSD8 datasets (Guo et al., 2019), consist of sensor-observed traffic volume and speed data gathered in Seattle, San Francisco, and San Bernardino, respectively. The data was collected at 5-minute intervals. Our goal for each dataset is to forecast the traffic speed of a specific local road segment of interest in the next time step
579
+
580
+ by utilizing the current traffic speed and volume data from both the local segment and its neighboring segments. Before sampling, we randomly selected 10 segments of interest for each dataset, setting $k = 10$ . There are natural joint distribution shifts among these segments because of the varying local traffic patterns.
581
+
582
+ The U.S.-States and Japan-Prefectures datasets (Deng et al., 2020) contain data on the number of patients infected with influenza-like illness (ILI) reported by the U.S. Department of Health and Human Services, the Centers for Disease Control and Prevention (CDC), and the Japan Infectious Diseases Weekly Report, respectively. The data in each dataset is structured based on the collection region. Our objective is to utilize the regional predictive features, including population, the increase in the number of infected patients observed in the current week, and the annual cumulative total of infections, to forecast the rise in infections for the following week in the corresponding region. We also randomly selected 10 regions for both datasets, so $k = 10$ . Due to the diverse regional epidemiological conditions, there are inherent joint distribution shifts among these regions.
583
+
584
+ For each dataset, we began by sampling $S_{XY}^{(i)}$ from each subset $i$ , for $i = 1, \dots, k$ , without replacement. After this step, we allocated the remaining elements within each subset for calibration and testing purposes. The parts intended for calibration across all subsets were then unified to form $S_{XY}^P$ . Lastly, to create diverse testing scenarios, we generated multiple test sets by randomly mixing the parts designated for testing from each subset with replacement. For each dataset, we conducted the sampling trial 10 times, and calculated the mean and standard deviation of the results from these trials, as shown in Figure 3, Figure 4, and Figure 5. For efficiency, all CP methods were conducted as split conformal prediction.
585
+
586
+ We introduce a toy example to further illustrate that exchangeability does not hold. Suppose we have two training distributions:
587
+
588
+ $$
589
+ D_{XY}^{(1)} = N\left([0, 0], \left[\begin{array}{cc} 1 & 0.7 \\ 0.7 & 1 \end{array}\right]\right); \quad D_{XY}^{(2)} = N\left([1, 1], \left[\begin{array}{cc} 1 & -0.6 \\ -0.6 & 1 \end{array}\right]\right).
590
+ $$
591
+
592
+ A calibration distribution is a mixture of these two training distributions with known weights, such as a uniformly weighted mixture $(w_{1} = w_{2} = 0.5)$ . A test distribution is a mixture of $D_{XY}^{(1)}$ and $D_{XY}^{(2)}$ with unknown random weights. To visualize the non-exchangeability in Figure 10, we assume the unknown test distribution has weights of 0.2 for $D_{XY}^{(1)}$ and 0.8 for $D_{XY}^{(2)}$ .
593
+
594
+ ![](images/ec9913943f807a6cdeb8d66a1c9e1bdf9ee72e32d9d80d1938406c33c765ce2a.jpg)
595
+ Figure 10: Calibration and test samples are not exchangeable as they are from different distributions.
596
+
597
+ # F.2 SPEARMAN'S COEFFICIENT
598
+
599
+ We first provide the definition of the Pearson coefficient.
600
+
601
+ Definition 5 (Pearson coefficient). With $n$ pairs of samples $(x_{i},y_{i})$ , $i = 1,\dots,n$ , of two random variables $X$ and $Y$ , the Pearson coefficient $r_p$ is calculated as the covariance of the samples divided by the product of their standard deviations. Formally, it is given by
602
+
603
+ $$
604
+ r _ {p} = \frac {\sum_ {i = 1} ^ {n} \left(x _ {i} - \bar {x}\right) \left(y _ {i} - \bar {y}\right)}{\sqrt {\sum_ {i = 1} ^ {n} \left(x _ {i} - \bar {x}\right) ^ {2}} \sqrt {\sum_ {i = 1} ^ {n} \left(y _ {i} - \bar {y}\right) ^ {2}}}, \tag {36}
605
+ $$
606
+
607
+ where $\overline{x}$ and $\overline{y}$ are the means of the samples of $X$ and $Y$ , respectively.
608
+
609
+ Based on Pearson coefficient, the definition of Spearman's coefficient is given as follows.
610
+
611
+ Definition 6 (Spearman's coefficient). With $n$ pairs of samples, $(x_{i},y_{i})$ for $i = 1,\dots,n$ , of two random variables $X$ and $Y$ , letting $r(\cdot)$ be the rank function (i.e., $r(x_1) = 3$ indicates that $x_{1}$ is the third largest sample among $x_{1},\ldots ,x_{n}$ ), Spearman's coefficient, $r_s$ , is defined as the Pearson coefficient between the ranked samples:
612
+
613
+ $$
614
+ r_s = \frac{\sum_{i = 1}^{n} \left(r(x_i) - \overline{r(x)}\right) \left(r(y_i) - \overline{r(y)}\right)}{\sqrt{\sum_{i = 1}^{n} \left(r(x_i) - \overline{r(x)}\right)^2} \sqrt{\sum_{i = 1}^{n} \left(r(y_i) - \overline{r(y)}\right)^2}}, \tag{37}
615
+ $$
616
+
617
+ where $\overline{r(x)}$ and $\overline{r(y)}$ are the means of the ranks of the samples of $X$ and $Y$ , respectively.
618
+
619
+ We calculated Spearman's coefficient between each distance measure and the largest coverage gap in Section 6 to confirm that the Wasserstein distance has the strongest positive correlation among the distance measures considered.
620
+
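+ For instance, Definition 6 can be checked numerically by computing the Pearson coefficient of ranks and comparing it with `scipy.stats.spearmanr` (a small self-contained sketch we add for illustration; the data is synthetic):
+ 
+ ```python
+ # Spearman's coefficient as the Pearson coefficient of ranks (Eq. (37)).
+ import numpy as np
+ from scipy.stats import pearsonr, rankdata, spearmanr
+ 
+ rng = np.random.default_rng(0)
+ x = rng.normal(size=50)
+ y = x ** 3 + 0.1 * rng.normal(size=50)     # a roughly monotone relation
+ 
+ r_manual = pearsonr(rankdata(x), rankdata(y))[0]   # Pearson on the ranks
+ r_scipy = spearmanr(x, y)[0]                       # library implementation
+ print(f"{r_manual:.6f} vs {r_scipy:.6f}")          # the two values agree
+ ```
+ 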
621
+ # F.3 ADDITIONAL EXPERIMENT RESULTS OF SUBSECTION 6.4
622
+
623
+ In addition to the results shown in Figure 4, we present further experimental findings from Subsection 6.4 with $\alpha$ values of 0.1, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9 in Figures 11, 12, 13, 14, 15, 16, 17, and 18, respectively. Clearly, WR-CP generates more tightly concentrated coverages near $1 - \alpha$ than vanilla CP and IW-CP. Additionally, it yields smaller prediction set sizes than the state-of-the-art method WC-CP. These figures also reveal a trend: as the $\alpha$ value increases, WR-CP requires a smaller $\beta$ to achieve acceptable coverages around $1 - \alpha$ , so the prediction set sizes produced by WR-CP are closer to those of vanilla CP and IW-CP, as evidenced by the results on PeMSD4 in Figure 11 and Figure 18. This phenomenon can be attributed to the trade-off between conformal prediction accuracy and efficiency under joint distribution shift. The Wasserstein regularization term in Eq. (19) tends to prioritize aligning smaller conformal scores initially, as this reduces the Wasserstein penalty with a smaller increase in the empirical risk minimization term. Hence, as the hyperparameter $\beta$ increases, the model gradually aligns larger conformal scores from the two distributions, which impacts the risk-driven term more adversely. For a higher $\alpha$ value, the focus is on ensuring that the coverages on test data are close to the smaller $1 - \alpha$ , so aligning the small conformal scores is what matters. Consequently, a high $\beta$ value is not necessary in this case, leading to smaller prediction set sizes.
624
+
625
+ # F.4 EXPERIMENT SETUPS IN ABLATION STUDY
626
+
627
+ To visualize a comprehensive and evenly distributed set of optimal solutions on the Pareto fronts, we ran WR-CP with varying values of $\beta$ to produce the results depicted in Figure 5. As mentioned in Section 5, when $\beta = 0$ , WR-CP reverts to IW-CP. The selected $\beta$ values for the results in Figure 5 are shown in Table 2.
628
+
629
+ Table 2: $\beta$ values of WR-CP in ablation study
630
+
631
+ <table><tr><td>Dataset</td><td>β values</td></tr><tr><td>Airfoil</td><td>1, 1.5, 2, 2.5, 3, 3.5, 4.5, 6, 8, 9, 13, 20.</td></tr><tr><td>PeMSD4</td><td>1, 1.5, 2, 2.5, 3, 5, 7, 9, 11, 15, 20.</td></tr><tr><td>PeMSD8</td><td>1, 1.5, 2, 2.5, 3, 4, 5, 7, 9, 17.</td></tr><tr><td>Seattle</td><td>1, 2, 3, 4, 4.5, 5, 5.5, 6, 7, 8, 10, 13, 15, 20.</td></tr><tr><td>U.S.</td><td>1, 1.5, 2, 2.5, 3, 5, 6, 8, 13.</td></tr><tr><td>Japan</td><td>1, 2, 3, 4, 6, 8, 10, 13, 20.</td></tr></table>
632
+
633
+ ![](images/684e17b6e8460804156857ae16ad1b53905f43d76b20ab83c95f7a8c56ecd79d.jpg)
634
+ Figure 11: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.1$ : The $\beta$ values for the WR-CP method are 9, 11, 9, 8, 13, and 20, respectively.
635
+
636
+ ![](images/dd6d96d02627f60764fe0244a328c7703e6108047599af8cd3263a9bf240bc77.jpg)
637
+ Figure 12: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.3$ : The $\beta$ values for the WR-CP method are 3, 5, 5, 5, 8, and 13, respectively.
638
+
639
+ ![](images/5f541ab67684d69f5b7ebf19db39a7b284dae59ee9a972a39c499493022e12f8.jpg)
640
+ Figure 13: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.4$ : The $\beta$ values for the WR-CP method are 3, 5, 5, 5, 8, and 13, respectively.
641
+
642
+ ![](images/cfc513c904c3b6ca92efd0373b7e0a19c65310d5753f9e4cb4c76670a2ea2130.jpg)
643
+ Figure 14: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.5$ : The $\beta$ values for the WR-CP method are 3, 5, 3, 5, 8, and 13, respectively.
644
+
645
+ ![](images/ba975033c68f712a377da0f968df2cf1d3e87af242cbc3e4b207009ee615e354.jpg)
646
+ Figure 15: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.6$ : The $\beta$ values for the WR-CP method are 3, 5, 3, 5, 8, and 13, respectively.
647
+
648
+ ![](images/6e86e787712e452465513caf528567494c25c8f5daf3c90501d481b35055a821.jpg)
649
+ Figure 16: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.7$ : The $\beta$ values for the WR-CP method are 2, 2, 2, 5, 8, and 10, respectively.
650
+
651
+ ![](images/c34dfee04964f812c03d8947bbd903a10e351778cb45df546212d234bf532d05.jpg)
652
+ Figure 17: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.8$ : The $\beta$ values for the WR-CP method are 2, 2, 2, 5, 5, and 10, respectively.
653
+
654
+ ![](images/ff21ae07e601afae12c3e0118f1896b7654a605ba23e1456ee513d9dea8dc3e0.jpg)
655
+ Figure 18: Coverages and set sizes of WR-CP and baselines with $\alpha = 0.9$ : The $\beta$ values for the WR-CP method are 2, 1, 1, 5, 2, and 6, respectively.
656
+
657
+ # G PREDICTION EFFICIENCY WITH COVERAGE GUARANTEE
658
+
659
+ Although the Wasserstein-regularized loss in Eq. (19) offers a controllable trade-off with significantly improved prediction efficiency and a mild coverage loss, it is worth investigating whether this efficiency can be achieved with a coverage guarantee. In this section, we first derive a coverage lower bound of WR-CP via the multi-source setup in Appendix G.1. Then, we show in Appendix G.2 that the combination of WC-CP and the Wasserstein-regularized loss cannot achieve small prediction sets with ensured coverage.
660
+
661
+ # G.1 COVERAGE GUARANTEE FROM MULTI-SOURCE SETUP
662
+
663
+ Under the setup of multi-source conformal prediction, with $\tau$ as the $1 - \alpha$ quantile of the weighted calibration conformal score distribution $Q_{V,s_P}$ , we can derive the coverage gap upper bound by
664
+
665
+ $$
666
+ \begin{array}{l} \left| F_{Q_{V, s_P}}(\tau) - F_{Q_V}(\tau) \right| = \left| \sum_{i = 1}^{k} w_i F_{D_{V, s_P}^{(i)}}(\tau) - \sum_{i = 1}^{k} w_i F_{D_V^{(i)}}(\tau) \right| \\ \leq \sum_{i = 1}^{k} w_i \left| F_{D_{V, s_P}^{(i)}}(\tau) - F_{D_V^{(i)}}(\tau) \right| \tag{38} \\ \leq \sup_{i \in \{1, \dots, k\}} \left| F_{D_{V, s_P}^{(i)}}(\tau) - F_{D_V^{(i)}}(\tau) \right|. \end{array}
667
+ $$
668
+
669
+ In other words, the coverage gap on a test distribution must be less than or equal to the largest gap at $\tau$ among the training distributions. Denoting $\alpha_{D} = \sup_{i\in \{1,\dots ,k\}}|F_{D_{V,s_{P}}^{(i)}}(\tau) - F_{D_{V}^{(i)}}(\tau)|$ , we have the coverage guarantee $\operatorname{Pr}(Y_{n + 1}\in \hat{C}(X_{n + 1}))\geq 1 - \alpha -\alpha_{D}$ .
670
+
671
+ The regularization term $\sum_{i=1}^{k} W(D_{V,s_P}^{(i)}, D_V^{(i)})$ in Eq. (19) can minimize $\alpha_D$ , thus bringing $1 - \alpha - \alpha_D$ closer to the desired $1 - \alpha$ . It is important to highlight that $\alpha_D$ is adaptive to variations in the test distribution $Q_V$ , as evident from Eq. (38). This adaptivity ensures that the lower bound dynamically adjusts to different $Q_V$ . To evaluate the prediction efficiency of WR-CP under this guarantee, we set $\alpha = 0.1$ and computed the corresponding $\alpha_D$ for various test distributions. Additionally, we calculated the coverage and prediction set size of WC-CP on each test distribution, using the corresponding guarantee at $1 - \alpha - \alpha_D$ for comparison.
672
+
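+ A minimal sketch of how $\alpha_D$ could be computed from finite samples is given below (our own illustration with synthetic placeholder scores; with equal weights and sample sizes, pooling the per-subset calibration scores realizes the weighted mixture):
+ 
+ ```python
+ # Compute tau and the adaptive slack alpha_D from Eq. (38) via empirical CDFs.
+ import numpy as np
+ 
+ def ecdf_at(samples, t):
+     return float(np.mean(samples <= t))
+ 
+ rng = np.random.default_rng(0)
+ k, alpha = 3, 0.1
+ cal = [rng.exponential(1.0, 400) for _ in range(k)]            # weighted calibration scores
+ trn = [rng.exponential(1.0 + 0.1 * i, 400) for i in range(k)]  # per-subset training scores
+ 
+ # tau: the (1 - alpha) quantile of the weighted calibration mixture.
+ tau = np.quantile(np.concatenate(cal), 1.0 - alpha)
+ 
+ alpha_D = max(abs(ecdf_at(c, tau) - ecdf_at(t, tau)) for c, t in zip(cal, trn))
+ print(f"adaptive coverage guarantee: {1.0 - alpha - alpha_D:.3f}")
+ ```
+ 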
673
+ ![](images/0563a0960f2941409a25246732d9acc48c34feeb09abfbb7bca7e43bd6ca3def.jpg)
674
+ Figure 19: Coverages and set sizes of WC-CP and WR-CP with coverage guarantee at $1 - \alpha -\alpha_{D}$
675
+
676
+ The experiment results are depicted in Figure 19, demonstrating improved prediction efficiency on the PeMSD4, PeMSD8, U.S.-States, and Japan-Prefectures datasets. However, the efficiency remains almost unchanged on the Seattle-loop dataset and even declines on the airfoil self-noise dataset. This phenomenon can be attributed to the regularization mechanism. While WR-CP enhances prediction efficiency by leveraging the calibration distribution to generate prediction sets, regularization inevitably increases prediction residuals, leading to larger prediction sets. These two opposing effects can interact differently depending on the dataset characteristics. When the efficiency gains outweigh the drawbacks of regularization, we observe reduced prediction set sizes. Conversely, on datasets like Seattle-loop and airfoil self-noise, the benefits of regularization are outweighed by the increased prediction residuals, resulting in unchanged or diminished efficiency. The average prediction set size reduction across the six datasets is $26.9\%$ .
677
+
678
+ # G.2 POOR COMPATIBILITY BETWEEN WASSERSTEIN-REGULARIZED LOSS AND WC-CP
679
+
680
+ Since WC-CP is a conservative post-hoc uncertainty quantification method while the proposed regularized loss in Eq. (19) is applied during training, one may consider applying WC-CP on top of a model trained with the regularized loss to obtain guaranteed coverage. However, WC-CP and the model are not suited to complementing each other. While regularization enhances the reliability of calibration distributions, the worst-case approach depends exclusively on the upper bound of the $1 - \alpha$ quantile of the test conformal scores, rendering it unable to benefit from regularization. On the contrary, WC-CP may produce larger prediction sets under this condition, as the regularization inevitably increases the prediction residuals, which in turn raises the upper bound of the test conformal score quantile. Experiment results in Figure 20 support this analysis, where WC-CP is the worst-case method based on a residual-driven model (the same as the WC-CP method in Section 6.4), and Hybrid WC-WR represents applying WC-CP to a model trained by Eq. (19).
681
+
682
+ ![](images/2ddc1fb86374f288d123a1b562774d612ba4fb9589d25eead4068fa68a3d4be9.jpg)
683
+ Figure 20: Coverages and set sizes of WC-CP and Hybrid WC-WR with coverage guarantee $1 - \alpha = 0.9$ .
684
+
685
+ # H LIMITATIONS
686
+
687
+ # H.1 SUSCEPTIBILITY TO DENSITY ESTIMATION ERRORS
688
+
689
+ Given that Wasserstein regularization relies on importance-weighted conformal scores, its performance is greatly influenced by the accuracy of the estimated likelihood ratio obtained through KDE. Inaccurate estimation can significantly impact the effectiveness of WR-CP. For instance, in Figure 4, WR-CP yields larger prediction set sizes with less concentrated coverages on the airfoil self-noise dataset compared to other datasets. This can be attributed to the airfoil self-noise dataset having the highest feature dimension (5) and the smallest size of the sampled $S_{XY}^{P}$ (500). These challenges in KDE lead to suboptimal performance of WR-CP on the airfoil self-noise dataset when compared to its performance on others.
690
+
691
+ The main reason for KDE error is numerical instability, which can arise from several factors. A poor choice of kernel is a critical contributor; for instance, kernels with sharp edges or discontinuities, such as rectangular or triangular kernels, can result in jagged density estimates and amplify errors near boundaries. Fat-tailed kernels, such as the Cauchy kernel, may assign excessive weight to distant data points, leading to inaccuracies in density estimates and numerical precision challenges. Additionally, the lack of feature normalization can exacerbate the effects of extreme values, skewing the density estimation process and reducing computational stability. Lastly, inappropriate bandwidth selection, either too small (overfitting) or too large (underfitting), can disrupt the balance between bias and variance, further contributing to instability in the estimation.
692
+
693
+ In our work, we first adopted the Gaussian kernel, valued for its smoothness and numerical stability. To mitigate the influence of extreme values, we applied feature normalization, ensuring a more stable density estimation process. Additionally, we conducted a comprehensive grid search to fine-tune the bandwidth, achieving an optimal balance between bias and variance for robust and accurate results. The bandwidth candidates were selected from a logarithmically spaced range between $10^{-2}$ and $10^{0.5}$ , consisting of 20 evenly distributed values on a logarithmic scale.
694
+
695
+ # H.2 COMPUTATIONAL CHALLENGES IN KDE
696
+
697
+ We applied a grid search approach to identify the optimal bandwidth for KDE, which ensures an effective balance between bias and variance in density estimation. However, this method often involves extensive computational effort, particularly when working with high-dimensional datasets, as it requires repeated calculations over a range of bandwidth values. To address this challenge, Bernacchia-Pigolotti KDE (Bernacchia & Pigolotti, 2011) introduces an innovative framework that combines a Fourier-based filter with a systematic approach for simultaneously determining both the kernel shape and bandwidth. This method not only reduces subjectivity in kernel selection but also offers a more efficient computational pathway. Building on this foundation, FastKDE (O'Brien et al., 2016) adapts and extends the Bernacchia-Pigolotti approach for high-dimensional scenarios, incorporating optimizations that significantly improve computational speed and scalability. These advancements represent promising directions for mitigating the computational overhead in our own work, where similar strategies could be leveraged to streamline the bandwidth selection process and enhance the overall efficiency of KDE in complex datasets.
698
+
699
+ # H.3 OTHER CHOICES OF THE CALIBRATION DISTRIBUTION
700
+
701
+ In the experiments conducted in Section 6, we specifically examine the scenario where the calibration data follows a mixture distribution of $D_{XY}^{(i)}$ for $i = 1,\dots,k$ with equal weights. However, this may not always be the case in real-world situations. Given that the calibration distribution plays a crucial role in determining the difficulty of minimizing Eq. (19) during training, it is valuable to investigate the performance of WR-CP with a calibration distribution different from a mixture of training distributions.
2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d3b41b418e97dcc2a203cada025c9d71f23b966084e594e6b9649eb6946dbdf2
3
+ size 1345734
2025/Wasserstein-Regularized Conformal Prediction under General Distribution Shift/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/e95cc24c-c30e-4b66-8854-56fbc828fdb9_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/e95cc24c-c30e-4b66-8854-56fbc828fdb9_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/e95cc24c-c30e-4b66-8854-56fbc828fdb9_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:30d911432bac2d805cdfb5c21903e9d4b9e06c7c00bc619f648beac41f4f1f92
3
+ size 1000411
2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/full.md ADDED
@@ -0,0 +1,400 @@
1
+ # WATCH LESS, DO MORE: IMPLICIT SKILL DISCOVERY FOR VIDEO-CONDITIONED POLICY
2
+
3
+ Jiangxing Wang
4
+
5
+ School of Computer Science
6
+
7
+ Peking University
8
+
9
+ jiangxiw@stu.pku.edu.cn
10
+
11
+ Zongqing Lu*
12
+
13
+ School of Computer Science
14
+
15
+ Peking University, BAAI
16
+
17
+ zongqing.lu@pku.edu.cn
18
+
19
+ # ABSTRACT
20
+
21
+ In this paper, we study the problem of video-conditioned policy learning. While previous works mostly focus on learning policies that perform a single skill specified by the given video, we take a step further and aim to learn a policy that can perform multiple skills according to the given video, and generalize to unseen videos by recombining these skills. To solve this problem, we propose our algorithm, Watch-Less-Do-More, an information bottleneck-based imitation learning framework for implicit skill discovery and video-conditioned policy learning. In our method, an information bottleneck objective is employed to control the information contained in the video representation, ensuring that it only encodes information relevant to the current skill (Watch-Less). By discovering potential skills from training videos, the learned policy is able to recombine them and generalize to unseen videos to achieve compositional generalization (Do-More). To evaluate our method, we perform extensive experiments in various environments and show that our algorithm substantially outperforms baselines (up to $2\times$ ) in terms of compositional generalization ability.
22
+
23
+ # 1 INTRODUCTION
24
+
25
+ As large language models (LLMs) have demonstrated remarkable zero-shot and few-shot generalization abilities (Brown et al., 2020; Ouyang et al., 2022), the research focus of decision-making policies has also shifted from addressing a specific task, such as mastering an environment via reinforcement learning (Sutton, 2018) or replicating a dataset via imitation learning (Hussein et al., 2017), to completing diverse tasks based on given instructions. These instructions can be treated as goals for the decision-making models, and encompass modalities such as text (Nair et al., 2022; Carta et al., 2023), goal image (Yadav et al., 2023b;a), or future state (Cui et al., 2022; Lee et al., 2024). To achieve such goal-conditioned policies, a variety of methods have been proposed and achieved great success across multiple domains (Liu et al., 2022). However, the aforementioned goal specifications often overlook dynamic information, such as the ordering of task completion or the method of task completion (if there are many). In contrast, video offers a natural way to represent these details, thereby leading to a line of research exploring video-conditioned policy learning (Eze & Crick, 2024b).
26
+
27
+ Existing methods for video-conditioned policy learning have been applied to various scenarios, including robotic manipulation (Chane-Sane et al., 2023; Shin et al., 2023; Jiang et al., 2023), navigation (Zhou et al., 2024), open-world agents (Cai et al., 2024), and autonomous driving (Shin et al., 2024). Taking different videos as input, the learned policy can be deployed to perform different skills to solve the corresponding tasks. However, these methods often consider only the video demonstration of a single task (Chane-Sane et al., 2023) and only object-level generalization (Jiang et al., 2023). In real-world applications, we often want the learned policy to perform a set of different skills to achieve a combination of multiple tasks.
28
+
29
+ When the video demonstration of a task combination is given, an ideal policy should directly perform skills as demonstrated in the given video. To train such a policy, researchers have explored
30
+
31
+ skill-based imitation learning methods (Xu et al., 2023; Wang et al., 2023; Shin et al., 2023; 2024). However, these methods often require explicit video segmentation annotations or videos of another embodiment to train a skill-based policy, which greatly increases the difficulty of data collection. Therefore, in this paper, we consider whether it is possible to learn a video-conditioned policy that can perform multiple skills without these requirements. Moreover, as we consider videos of different task combinations, we also expect the learned policy to achieve compositional generalization (Lin et al., 2023), that is, to still perform well on task combinations that have not been seen during training.
32
+
33
+ To fulfill such an expectation, we propose our algorithm, Watch-Less-Do-More (WL-DM), an information bottleneck-based imitation learning framework for implicit skill discovery and video-conditioned policy learning. For a video-conditioned policy, the given video can be considered as a sequence of tasks. As the policy can only work on one task at a time, it should be able to perform well by focusing only on the current task instead of the entire task sequence. Based on this intuition, WL-DM employs the information bottleneck method (Tishby et al., 2000) to control the information contained in the video representation. This is accomplished by 1) minimizing the mutual information (Cover, 1999) between the video and its representation to reduce the information contained in the video representation, and 2) maximizing the mutual information between the video representation and the current skill to preserve enough information related to the current task. To better understand the effect of this information bottleneck method, we further build a theoretical connection between the proposed method and the intuition behind it. Using this method, WL-DM makes the learned policy consider only the current task, achieving implicit video segmentation without requiring explicit segmentation annotations. Focusing only on the current task is directly tied to compositional generalization: when an unseen video is given, the video-conditioned policy learned by WL-DM can implicitly decompose it into seen tasks and perform the corresponding skills. To further validate our algorithm, we propose a practical implementation of our method and conduct various empirical evaluations across diverse environments. The experimental results indicate that WL-DM achieves substantially better compositional generalization ability than baselines, demonstrating the effectiveness of our method.
34
+
35
+ Our contributions can be summarized as follows:
36
+
37
+ - We propose our method, Watch-Less-Do-More (WL-DM), an information bottleneck-based imitation learning framework for implicit skill discovery and video-conditioned policy learning, where two different mutual information terms work together to ensure the video representation contains only information related to the current task.
38
+ - The intuition behind WL-DM is that the optimal policy should behave similarly when conditioned on all tasks and when conditioned on only the current task. To better explain our method, we further build a theoretical connection between WL-DM and this intuition.
39
+ - We propose a practical implementation of our algorithm and perform empirical evaluations in Franka Kitchen (Gupta et al., 2020) and Meta World (Yu et al., 2020) to demonstrate the effectiveness of WL-DM. The experimental results indicate that WL-DM achieves up to $2\times$ better compositional generalization ability compared to baselines.
40
+
41
+ # 2 RELATED WORK
42
+
43
+ # 2.1 LEARNING FROM VIDEOS
44
+
45
+ Using massive Internet data to train language models has been proven to be successful and has resulted in a trend of research on large language models (Brown et al., 2020; Touvron et al., 2023). Inspired by this success, researchers have begun to pay attention to another type of data widely available on the Internet, video data, and produced a series of studies on learning from videos (McCarthy et al., 2024; Eze & Crick, 2024a). For decision-making models, video data can be used in various ways, such as reward function learning (Escontrela et al., 2023; Sermanet et al., 2018; Chen et al., 2021a), dynamics model learning (Baker et al., 2022), representation learning (Nair et al., 2023), and policy learning (Jang et al., 2022; Jiang et al., 2023; Chane-Sane et al., 2023; Shin et al., 2023; 2024). Our paper belongs to the last category, that is, using video demonstrations as instructions to learn a video-conditioned policy. It is worth noting that previous work in this category often focuses
46
+
47
+ only on demonstration videos containing a single task (Chane-Sane et al., 2023), or requires aligned data of other modalities (Jang et al., 2022; Shin et al., 2023; 2024). This can be attributed to the lack of clear goal labels in demonstration videos (McCarthy et al., 2024). Therefore, when dealing with videos containing multiple tasks, we often need to introduce information in other modalities to provide segmentation annotations for the video, to distinguish the tasks to be completed at each stage (Shin et al., 2023; 2024). Unlike previous work, in this paper, we attempt to directly learn a video-conditioned policy capable of handling videos containing multiple tasks, without introducing additional segmentation annotations.
48
+
49
+ # 2.2 ONE-SHOT IMITATION LEARNING
50
+
51
+ One-shot imitation learning was originally introduced in Duan et al. (2017), where the goal is to learn a policy that can quickly adapt to a new task given a single demonstration. One-shot imitation learning can be achieved through different learning methods such as meta-learning (Duan et al., 2017; Finn et al., 2017), semi-supervised learning (Wu et al., 2024), and imitation learning (Jang et al., 2022; Cui et al., 2022; Jiang et al., 2023). Specifically, our method falls into the last category: we assume the existence of an imitation learning dataset paired with video demonstrations, such that we can use this dataset to train a video-conditioned policy.
52
+
53
+ One-shot demonstrations can be presented in various formats, such as trajectories (Cui et al., 2022; Lee et al., 2024), videos (Dasari & Gupta, 2021; Jain et al., 2024; Wang et al., 2023; Xu et al., 2023; Chane-Sane et al., 2023), multimodal information (Jiang et al., 2023; Shin et al., 2023; 2024), etc. In this paper, we consider adapting to new tasks through video demonstrations, that is, one-shot video imitation learning. In previous work, video demonstrations often only include a single task, and the adaptation to new tasks mainly focuses on differences at the embodiment and object level (locations, textures, etc.) (Dasari & Gupta, 2021; Mandi et al., 2022; Chane-Sane et al., 2023). Unlike these studies, we consider video demonstrations containing multiple tasks and focus on adaptation at the level of task combination. For this setting, previous work generally assumes the existence of data corresponding to another embodiment (Wang et al., 2023; Xu et al., 2023) or assumes information in other modalities to provide video segmentation annotations (Shin et al., 2023; 2024). Unlike these works, we do not assume additional data and learn a video-conditioned policy that can finish multiple tasks solely through the information contained in the videos.
54
+
55
+ # 2.3 COMPOSITIONAL GENERALIZATION
56
+
57
+ Due to the compositional nature of natural language, most previous work considers the compositional generalization problem over language instructions. For example, Oh et al. (2017) proposed a method based on hierarchical reinforcement learning that enables the policy to generalize to unseen command combinations and longer command sequences at test time. Stengel-Eskin et al. (2022) combined the transformer model and the masking mechanism to obtain generalization over object combinations. Spilsbury & Ilin (2022) further investigated attention mechanisms for compositional generalization and proposed a method utilizing sparse factored attention for goal identification. Modular architectures are another way to induce compositional generalization. Carvalho et al. (2023) proposed modular successor features to enhance the compositional generalization ability, and Logeswaran et al. (2023) directly considered an additive decomposition of the state-action value function to obtain generalization over language instructions.
58
+
59
+ Unlike these studies, we consider the generalization across different task combinations based on video demonstrations. During training, we only have access to a subset of task combinations and their corresponding video demonstrations. Our goal is to enable the policy to decompose different tasks from the videos and acquire skills to solve these tasks. At test time, the policy is expected to reproduce an unseen video demonstration by combining a set of skills learned in the training set. This setting has been studied by Wang et al. (2023); Xu et al. (2023); Shin et al. (2023; 2024). However, Wang et al. (2023); Xu et al. (2023) focused on the cross-embodiment scenario, thus requiring video data from another embodiment, and Shin et al. (2023; 2024) required language information to provide segmentation annotations for videos. Unlike them, our method incorporates an information bottleneck-based objective to achieve implicit video segmentation and skill discovery, without the need for other sources of information.
60
+
61
+ # 3 PROBLEM FORMULATION
62
+
63
+ In this paper, we consider the video-conditioned policy learning problem. This problem can be formulated as a special case of the goal-conditioned Markov Decision Process (MDP) (Nasiriany et al., 2019) and defined by a tuple $\langle S, \mathcal{G}, \mathcal{A}, P, R, \rho_0, \gamma \rangle$ . Similar to the general MDP, $S$ is the set of states, $\mathcal{A}$ is the set of actions, $P(s_{t+1}|s_t, a_t)$ is the transition probability, $\rho_0$ is the initial state distribution and $\gamma$ is the discount factor. Additionally, we have $\mathcal{G}$ as the set of goals, which will also affect the reward function $R(s_t, a_t, g)$ . For a goal-conditioned policy $\pi(a_t|s_t, g)$ with a given goal $g$ , we want it to maximize the following objective:
64
+
65
+ $$
+ \mathcal{J}(\pi) = \mathbb{E}_{s_0 \sim \rho_0,\, a_t \sim \pi,\, s_{t+1} \sim P}\left[\sum_{t} \gamma^{t} R(s_t, a_t, g)\right].
+ $$
68
+
69
+ As we focus on video-conditioned policy learning, we assume our goals to be videos, $\mathcal{G} = \mathcal{V}$, such that a goal-conditioned policy $\pi(a_t|s_t,g)$ becomes a video-conditioned policy $\pi(a_t|s_t,v)$. Moreover, we consider the case where each video $v = (\mathrm{k}_1,\dots,\mathrm{k}_N)$ contains multiple tasks $\mathrm{k} \in \mathcal{T}$, where $N$ is the number of tasks and $\mathcal{T}$ is the set of all possible tasks. To evaluate the compositional generalization ability of the video-conditioned policy, we assume two video sets $\mathcal{V}_{\text{train}}$ and $\mathcal{V}_{\text{test}}$, such that there is no overlap between the train and test video sets, $\mathcal{V}_{\text{train}} \cap \mathcal{V}_{\text{test}} = \emptyset$, and both video sets contain all possible tasks, $\bigcup_{v \in \mathcal{V}_{\text{train}}, \mathrm{k} \in v} \mathrm{k} = \bigcup_{v \in \mathcal{V}_{\text{test}}, \mathrm{k} \in v} \mathrm{k} = \mathcal{T}$. The video-conditioned policy will be trained on $\mathcal{V}_{\text{train}}$ to maximize $\mathbb{E}_{v \sim \mathcal{P}_{\text{train}}} \mathcal{J}(\pi)$ and will be tested on $\mathcal{V}_{\text{test}}$ in terms of $\mathbb{E}_{v \sim \mathcal{P}_{\text{test}}} \mathcal{J}(\pi)$, where $\mathcal{P}_{\text{train}}$ and $\mathcal{P}_{\text{test}}$ are uniform distributions over $\mathcal{V}_{\text{train}}$ and $\mathcal{V}_{\text{test}}$, respectively.
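+
+ To make this split concrete, the following toy sketch (ours, not from the paper; task names are illustrative) constructs $\mathcal{V}_{\text{train}}$ and $\mathcal{V}_{\text{test}}$ over orderings of a small task set and checks the two conditions above: the sets share no task combination, yet both cover $\mathcal{T}$.
+
+ ```python
+ # A minimal sketch of the train/test split in Section 3, assuming task
+ # combinations are orderings of a toy task set. All names are illustrative.
+ import itertools
+ import random
+
+ TASKS = ["microwave", "kettle", "burner", "light"]  # a toy task set T
+
+ def split_combinations(n_train, seed=0):
+     """Split all task combinations into disjoint train/test sets covering T."""
+     combos = list(itertools.permutations(TASKS))  # 4! = 24 combinations
+     rng = random.Random(seed)
+     rng.shuffle(combos)
+     train, test = combos[:n_train], combos[n_train:]
+     cover = lambda cs: set(itertools.chain.from_iterable(cs))
+     assert cover(train) == cover(test) == set(TASKS)  # both sets cover T
+     assert not set(train) & set(test)                 # no overlapping combination
+     return train, test
+
+ v_train, v_test = split_combinations(n_train=17)
+ print(len(v_train), len(v_test))  # 17 7
+ ```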
70
+
71
+ # 4 METHOD
72
+
73
+ In this section, we introduce our method, Watch-Less-Do-More (WL-DM). The intuition behind our method is that we want the video-conditioned policy to make decisions relying not on the entire video, but only on information related to the current task, thereby achieving implicit video segmentation and skill discovery. To achieve this, we propose an information bottleneck-based objective and theoretically establish the connection between this objective and our intuition. By decomposing training videos into a combination of different skills, the video-conditioned policy can handle unseen videos by recombining these skills to complete the required task combinations demonstrated in the unseen video.
74
+
75
+ # 4.1 INTUITION: FOCUSING ON THE CURRENT TASK
76
+
77
+ As formulated in Section 3, we assume that each video $v$ contains $N$ tasks $(\mathrm{k}_1,\dots,\mathrm{k}_N)$ that need to be completed and that the completion of these tasks is independent. In this case, we further assume a training set $\mathcal{D} = \{\tau_i, v_i\}$, where $\tau_i = (s_0, a_0^*, \dots, s_T, a_T^*)_i$ is the expert trajectory corresponding to the video $v_i = (f_0, \dots, f_T)$ and $f_t$ is the video frame at timestep $t$. Given such a dataset, we can easily learn a video-conditioned policy $\pi(a_t|s_t,v)$ through imitation learning (Hussein et al., 2017) that can complete different task combinations given different videos, at least within the coverage of the training set. For example, the policy can be trained via the following behavior-cloning loss:
78
+
79
+ $$
+ \mathcal{L}_{\mathrm{BC}}(\theta, \phi) = -\mathbb{E}_{s_t, a_t^*, v \sim \mathcal{D}}\left[\log \pi\left(a_t^* \mid s_t, v\right)\right] = -\mathbb{E}_{s_t, a_t^*, v \sim \mathcal{D}}\left[\log \mathrm{f}_{\theta}\left(a_t^* \mid s_t, \mathrm{g}_{\phi}(v)\right)\right], \tag{1}
+ $$
82
+
83
+ where $\mathrm{g}_{\phi}$ is the video encoder and $\mathrm{f}_{\theta}$ is the action decoder.
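+
+ As a concrete reference, a minimal PyTorch sketch of Equation (1) is given below. This is our own illustration, not the authors' code: the encoder and decoder are simplified placeholders (the paper uses a transformer encoder and a discretized action head with 32 bins; see Appendix A), so the loss reduces to a cross-entropy over action bins.
+
+ ```python
+ # A minimal sketch of the behavior-cloning loss in Equation (1).
+ # VideoEncoder/ActionDecoder are simplified stand-ins for g_phi and f_theta.
+ import torch
+ import torch.nn as nn
+
+ class VideoEncoder(nn.Module):                       # stands in for g_phi
+     def __init__(self, frame_dim=64, hid=60):
+         super().__init__()
+         self.rnn = nn.GRU(frame_dim, hid, batch_first=True)
+     def forward(self, video):                        # video: (B, T, frame_dim)
+         _, h = self.rnn(video)
+         return h[-1]                                 # h_v: (B, hid)
+
+ class ActionDecoder(nn.Module):                      # stands in for f_theta
+     def __init__(self, state_dim=9, hid=60, n_bins=32):
+         super().__init__()
+         self.net = nn.Sequential(nn.Linear(state_dim + hid, hid), nn.ReLU(),
+                                  nn.Linear(hid, n_bins))
+     def forward(self, state, h_v):                   # logits over action bins
+         return self.net(torch.cat([state, h_v], dim=-1))
+
+ def bc_loss(encoder, decoder, state, action_bin, video):
+     """L_BC = -E[log f_theta(a* | s, g_phi(v))] as a cross-entropy."""
+     logits = decoder(state, encoder(video))
+     return nn.functional.cross_entropy(logits, action_bin)
+ ```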
84
+
85
+ A potential problem with this training method is that, when the size of our training set is limited, the learned policy can easily overfit (Ying, 2019) to videos in the training set. Overfitting causes the learned policy to focus on video-specific details that fully distinguish training videos from one another, while ignoring the fact that all videos are composed of elements from the same task set. In such a case, when an unseen video is given, i.e., an unseen combination of tasks, the performance of the learned policy may decrease dramatically. To address this problem, we need to exploit the fact that all videos are composed of elements from the same task set $\mathcal{T}$: even for unseen videos, although the corresponding task combinations are not included in the training set, each task that constitutes them has already been covered in the training set. Therefore, if we can decompose videos into individual tasks and train the policy based on the decomposed tasks, such that $\pi(a_t|s_t,v) = \pi(a_t|s_t,\mathrm{v}_{\mathrm{cur}})$, where $\mathrm{v}_{\mathrm{cur}}$ is the video segment corresponding to the current task $\mathrm{k}_{\mathrm{cur}}$ and $v = (\mathrm{v}_{\mathrm{cur}},\mathrm{v}_{\mathrm{other}})$, the policy can then handle unseen videos, as all the skills demonstrated in them have been covered and trained on in the training set; this is commonly known as compositional generalization (Lin et al., 2023). However, such task-level video segmentation annotations are often inaccessible. In this paper, unlike previous work (Shin et al., 2023; 2024), we do not assume such annotations. To achieve a similar effect, we propose an information bottleneck-based method that allows the policy to implicitly decompose demonstration videos, enabling it to rely only on the information related to the current skill when making decisions, thereby achieving compositional generalization.
88
+
89
+ # 4.2 INFORMATION BOTTLENECK FOR VIDEO-CONDITIONED POLICY LEARNING
90
+
91
+ As described in Section 4.1, when all tasks can be completed independently, a video-conditioned policy $\pi (a_{t}|s_{t},v) = \pi (a_{t}|s_{t},\mathrm{v}_{\mathrm{cur}})$ can achieve the compositional generalization ability. However, a video contains information about not only the current task $\mathbf{k}_{\mathrm{cur}}$ but also other tasks $\mathbf{k}_{\mathrm{other}}$ . Therefore, we need an additional objective to train the video encoder $\mathbf{g}_{\phi}$ , such that it produces a similar representation for $v$ and $\mathrm{v}_{\mathrm{cur}}$ , and we can then ensure $\pi (a_{t}|s_{t},v) = \pi (a_{t}|s_{t},\mathrm{v}_{\mathrm{cur}})$ . To achieve this, we need to reduce the mutual information between video representations $h_v$ and the video segments of other tasks $\mathrm{v}_{\mathrm{other}}$ , and the reason can be seen from the following theorem:
92
+
93
+ Theorem 1. If we have $\mathrm{MI}(h_v; \mathrm{v}_{\text{other}}|s, \mathrm{v}_{\text{cur}}) = 0$ , then $D_{\mathrm{KL}}\big(\pi(a|s, v) || \pi(a|s, \mathrm{v}_{\text{cur}})\big) = 0$ for all state-video pairs $(s, v) \in S \times \mathcal{V}$ with non-zero probability $P(s, v) > 0$ .
94
+
95
+ Proof. See Appendix B.
96
+
97
98
+
99
+ This theorem suggests the necessity of reducing the information of other tasks contained in the video representation. However, since we do not assume any video segmentation annotation, we cannot directly obtain segments corresponding to the current task and other tasks from the video, and therefore cannot directly manipulate the mutual information. Hence, we use a constructive method to manipulate the information in the video representation indirectly. Specifically, we first minimize the mutual information between the video representation $h_v$ and the entire video $v$ to minimize the amount of information contained in the video representation. At the same time, we maximize the mutual information between the video representation and some approximation of the current skill (which will be discussed later). Since different skills are required for performing different tasks, we can in this way indirectly ensure that the video representation still retains a certain amount of information about the current task. Putting these two terms together, we can construct the following objective, which is often referred to as the information bottleneck (Tishby et al., 2000):
100
+
101
+ $$
+ \mathcal{L}_{\mathrm{IB}} = \mathrm{MI}\left(h_v; v \mid s\right) - \alpha\, \mathrm{MI}\left(h_v; z \mid s\right), \tag{2}
+ $$
104
+
105
+ where $z$ is an approximated representation of the current skill, and $\alpha$ is the coefficient for the trade-off between the two mutual information terms. As discussed in Tishby & Zaslavsky (2015), the information bottleneck is often used to learn a compact representation, which in our case means discarding the irrelevant part $\mathrm{k}_{\mathrm{other}}$ and retaining the relevant part $\mathrm{k}_{\mathrm{cur}}$. In the following two sections, we discuss how to compute this objective in practice.
106
+
107
+ # 4.3 MINIMIZING MUTUAL INFORMATION WITH VIDEO
108
+
109
+ The first term in Equation (2) is to minimize the mutual information between video representation $h_v$ and the entire video $v$ . By expanding this term, we have:
110
+
111
+ $$
+ \mathrm{MI}\left(h_v; v \mid s\right) = \mathbb{E}_{P(s)} \mathbb{E}_{P(v)}\left[D_{\mathrm{KL}}\left(\mathrm{g}_{\phi}\left(h_v \mid s, v\right) \,\|\, P\left(h_v \mid s\right)\right)\right], \tag{3}
+ $$
114
+
115
+ where $P(s)$ and $P(v)$ represent the state and video distributions, respectively, and $P(h_v|s)$ is the marginal distribution $P(h_v|s) = \mathbb{E}_{P(v)}\mathrm{g}_{\phi}(h_v|s,v)$. As estimating this marginal distribution can be intractable in practice, previous work (Goyal et al., 2018; Eysenbach et al., 2021) commonly approximates it with some prior $g(h_v|s)$. As we can see from Equation (3), the goal of this objective is to minimize the distance between the video representation $h_v \sim \mathrm{g}_{\phi}(h_v|s,v)$ and some prior $g(h_v|s)$ that does not consider the video $v$ at all. As shown later in the experiments, such a target for distance minimization is undesirable, as it induces too great a loss of video information. To solve this problem, we need a better alternative for $g(h_v|s)$ such that the loss of video information can be controlled at a proper level.
+
+ ![](images/6227782011a2a9aa96af4305eaa3de503ed9600abaab184d06cb8920c9688461.jpg)
+ Figure 1: Overall Framework of WL-DM. We introduce an information bottleneck-based objective to achieve implicit video segmentation and skill discovery. Blocks with different colors represent different tasks. MHA stands for Multi-head Attention.
121
+
122
+ Recalling the intuition in Section 4.1, we want the representation to be related only to the video segment of the current task, $\mathrm{v}_{\mathrm{cur}}$. Although we cannot access the precise $\mathrm{v}_{\mathrm{cur}}$, we do have access to the video $v = (f_0,\dots,f_T)$, which allows us to approximate $\mathrm{v}_{\mathrm{cur}}$ using a future video segment $\tilde{\mathrm{v}}_{\mathrm{cur}} = (f_t, f_{t+1}, \dots, f_{t+L})$ for state $s_t$, where $L$ is a randomly sampled window size. Therefore, we can use $\tilde{\mathrm{v}}_{\mathrm{cur}}$ as the input and obtain a prior video encoder $\mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}(h_v|s,\tilde{\mathrm{v}}_{\mathrm{cur}})$, which leads to the minimization of the following equation:
123
+
124
+ $$
+ \mathbb{E}_{P(s)} \mathbb{E}_{P(v)}\left[D_{\mathrm{KL}}\left(\mathrm{g}_{\phi}\left(h_v \mid s, v\right) \,\|\, \mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}\left(h_v \mid s, \tilde{\mathrm{v}}_{\mathrm{cur}}\right)\right)\right]. \tag{4}
+ $$
127
+
128
+ With this prior encoder, we can get the final objective for mutual information minimization:
129
+
130
+ $$
+ \mathcal{L}_{\mathrm{MI}^{-}} = \mathbb{E}_{s, v \sim \mathcal{D}}\left[D_{\mathrm{KL}}\left(\mathrm{g}_{\phi}\left(h_v \mid s, v\right) \,\|\, \mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}\left(h_v \mid s, \tilde{\mathrm{v}}_{\mathrm{cur}}\right)\right)\right]. \tag{5}
+ $$
133
+
134
+ In addition to Equation (5), the prior encoder $\mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}$ is also trained via behavior cloning similar to Equation (1) with another action decoder $\mathrm{f}_{\tilde{\theta}}$ attached after it.
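+
+ For concreteness, here is a sketch (ours; the Gaussian encoder heads and the window sampling range are assumptions consistent with Appendix A, not the authors' exact code) of how the KL term in Equation (5) can be computed when both encoders output diagonal Gaussians over $h_v$:
+
+ ```python
+ # A sketch of L_MI-: KL between the posterior encoder (full video v) and the
+ # prior encoder (future window v~_cur), assuming diagonal-Gaussian outputs.
+ import torch
+
+ def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
+     """KL( N(mu_q, var_q) || N(mu_p, var_p) ) with diagonal covariances."""
+     var_q, var_p = logvar_q.exp(), logvar_p.exp()
+     kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
+     return kl.sum(dim=-1).mean()
+
+ def mi_min_loss(post_enc, prior_enc, state, video, t, lo=20, hi=40):
+     # Approximate v_cur with a future segment whose length L is sampled
+     # uniformly from [lo, hi], as described in Appendix A.
+     L = int(torch.randint(lo, hi + 1, (1,)))
+     v_cur = video[:, t : t + L]
+     mu_q, logvar_q = post_enc(state, video)    # g_phi(h_v | s, v)
+     mu_p, logvar_p = prior_enc(state, v_cur)   # g_prior(h_v | s, v~_cur)
+     return kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
+ ```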
135
+
136
+ # 4.4 MAXIMIZING MUTUAL INFORMATION WITH SKILL APPROXIMATION
137
+
138
+ Another term in Equation (2) is to maximize the mutual information between the video representation $h_v$ and some skill approximation $z$. We follow Yuan et al. (2024) and use the short-term behavior $z = (a_t, a_{t+1}, \dots, a_{t+M})$ as the representation of skills for state $s_t$, where $M$ is a randomly sampled window size. To enhance the level of abstraction of the skill representation, we further propose to first cluster all actions in the training dataset $\mathcal{D}$ and then use the cluster id $x_t$ of each action to form the skill representation, such that $z = (x_t, x_{t+1}, \dots, x_{t+M})$. To maximize $\mathrm{MI}(h_v; z)$, note that $\mathrm{MI}(h_v; z) = \mathcal{H}(z) - \mathcal{H}(z \mid h_v)$ and $\mathcal{H}(z)$ does not depend on the video encoder, so it suffices to maximize $\log P(z|h_v)$, which we approximate with a learned decoder. As we have $z = (x_t, x_{t+1}, \dots, x_{t+M})$, similar to Yuan et al. (2024), we can decompose the above maximization into each timestep and get the final objective for mutual information maximization:
139
+
140
+ $$
+ \mathcal{L}_{\mathrm{MI}^{+}} = -\mathbb{E}_{s_t, x_t, v \sim \mathcal{D}}\left[\log \mathrm{f}_{\psi}^{\mathrm{skill}}\left(x_t \mid s_t, \mathrm{g}_{\phi}(v)\right)\right], \tag{6}
+ $$
143
+
144
+ where we introduce the skill decoder $\mathrm{f}_{\psi}^{\mathrm{skill}}$ to enhance the dependency between $h_v$ and $z$.
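+
+ The skill approximation admits a simple implementation sketch (ours; the cluster count and module signatures are assumptions): actions are clustered once offline, and Equation (6) becomes a classification loss over cluster ids.
+
+ ```python
+ # A sketch of L_MI+: cluster all dataset actions with k-means, then train the
+ # skill decoder to predict the current action's cluster id from (s_t, h_v).
+ import torch
+ import torch.nn as nn
+ from sklearn.cluster import KMeans
+
+ def cluster_actions(all_actions, n_bins=32, seed=0):
+     """Fit k-means once over every action in the training dataset D."""
+     return KMeans(n_clusters=n_bins, random_state=seed, n_init=10).fit(all_actions)
+
+ def mi_max_loss(skill_decoder, encoder, state, video, action, kmeans):
+     """L_MI+ = -E[log f_psi^skill(x_t | s_t, g_phi(v))] as a cross-entropy."""
+     x_t = torch.as_tensor(kmeans.predict(action.numpy()), dtype=torch.long)
+     logits = skill_decoder(state, encoder(video))  # placeholder modules
+     return nn.functional.cross_entropy(logits, x_t)
+ ```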
145
+
146
+ # 4.5 SUMMARY
147
+
148
+ Putting Equations (1), (5) and (6) together, we can now have the total loss for WL-DM:
149
+
150
+ $$
+ \mathcal{L}_{\text{WL-DM}} = \mathcal{L}_{\mathrm{BC}} + \alpha_1 \mathcal{L}_{\mathrm{MI}^{-}} + \alpha_2 \mathcal{L}_{\mathrm{MI}^{+}},
+ $$
153
+
154
+ where the two coefficients $\alpha_1$ and $\alpha_2$ balance the scale of these three terms. The overall framework of our algorithm is illustrated in Figure 1. We use multiple self-attention layers as the encoder $g_{\phi}$ to process video tokens and state tokens, and then use the action decoder $f_{\theta}$ to predict action labels $a_t^*$. The joint optimization of $\mathcal{L}_{\mathrm{MI^{-}}}$ and $\mathcal{L}_{\mathrm{MI^{+}}}$ ensures the video representation contains only information related to the current task. The pseudocode of our algorithm is summarized in Algorithm 1. It is worth noting that the skill decoder $f_{\psi}^{\mathrm{skill}}$, the prior video encoder $g_{\tilde{\phi}}^{\mathrm{prior}}$, and the prior action decoder $f_{\tilde{\theta}}$ are used only during training; only the video encoder $g_{\phi}$ and the action decoder $f_{\theta}$ are kept for execution.
155
+
156
+ Algorithm 1 WL-DM
157
+ 1: Initialize video encoder $\mathrm{g}_{\phi}$ , action decoder $\mathrm{f}_{\theta}$ and skill decoder $\mathrm{f}_{\psi}^{\mathrm{skill}}$
158
+ 2: Initialize prior video encoder $\mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}$ and prior action decoder $\mathrm{f}_{\tilde{\theta}}$
159
+ 3: Initialize training dataset $\mathcal{D}$
160
+ 4: for $i = 1$ to $I$ do
161
+ 5: Sample data $(s_t, a_t^*, v)$ from $\mathcal{D}$
162
+ 6: Construct approximation $\tilde{\mathrm{v}}_{\mathrm{cur}}$ of the current video segment $\mathrm{v}_{\mathrm{cur}}$
163
+ 7: Construct approximation of current skill $z$
164
+ 8: Update $\mathrm{g}_{\phi}$ and $\mathrm{f}_{\theta}$ by Equation (1) with $(s_t, a_t^*, v)$
165
+ 9: Update $\mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}$ and $\mathrm{f}_{\tilde{\theta}}$ by Equation (1) with $(s_t, a_t^*, \tilde{\mathrm{v}}_{\mathrm{cur}})$
166
+ 10: Update $\mathrm{g}_{\phi}$ and $\mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}$ by Equation (5) with $(s_t, v, \tilde{\mathrm{v}}_{\mathrm{cur}})$
167
+ 11: Update $\mathrm{g}_{\phi}$ and $\mathrm{f}_{\psi}^{\mathrm{skill}}$ by Equation (6) with $(s_t, z, v)$
168
+ 12: end for
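+
+ The loop above can be condensed into a single gradient step, as in the sketch below (ours; the loss callables and network names are placeholders for the pieces sketched in Sections 4.1-4.4, and the per-line updates of Algorithm 1 are merged into one summed objective for brevity):
+
+ ```python
+ # A condensed Python rendering of Algorithm 1's inner loop (lines 5-11).
+ def wl_dm_update(batch, nets, losses, optimizer, alpha1=1e-3, alpha2=10.0):
+     s, a_star, x, v, v_cur = batch  # lines 5-7: sampled data and approximations
+     loss = (losses["bc"](nets["enc"], nets["dec"], s, a_star, v)                       # line 8
+             + losses["bc"](nets["prior_enc"], nets["prior_dec"], s, a_star, v_cur)     # line 9
+             + alpha1 * losses["mi_minus"](nets["enc"], nets["prior_enc"], s, v, v_cur) # line 10
+             + alpha2 * losses["mi_plus"](nets["enc"], nets["skill_dec"], s, x, v))     # line 11
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```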
169
+
170
+ # 5 EXPERIMENTS
171
+
172
+ # 5.1 EXPERIMENT SETUP
173
+
174
+ To validate our method, we conduct empirical evaluations on two different robotic environments, Franka Kitchen (Gupta et al., 2020) and Meta World (Yu et al., 2020). The visualization of these two environments is presented in Figure 2.
175
+
176
+ In Franka Kitchen (FK), we control a Franka Panda robot in the kitchen environment to perform seven possible tasks: microwave (M), kettle (K), bottom burner (B), top burner (T), light switch (L), slide cabinet (S) and hinge cabinet (H). The dataset from the original paper (Gupta et al., 2020) contains 566 trajectories corresponding to 24 different task combinations. To enable video-conditioned policy training, we train an expert policy using the original dataset to collect trajectories and corresponding video demonstrations. To evaluate the one-shot imitation learning ability, we split the dataset into a training dataset and a test dataset, where the training dataset contains 17 different task combinations and the test dataset contains 7 different task combinations, and there is no overlap of task combinations between the training set and the test set. During testing, we sample 3 different video demonstrations for each task combination, run the evaluation 10 times, and report the average performance.
+
+ ![](images/8bba642034117d063c11bcf50d2f0eb54979782b3f9ee55fc20acbd93bb12003.jpg)
+ (a) Franka Kitchen
+
+ ![](images/b328aae3d46ae810cbf57c69666de832383a175c43f72b176cf7e505c2f2aea5.jpg)
+ (b) Meta World
+ Figure 2: Visualization of Experiment Environments
186
+
187
+ In Meta World (MW), we modify the original environment (Yu et al., 2020) to perform multiple tasks within a single episode. In this newly devised environment, we control a Sawyer robot to perform four possible tasks: close drawer (D), open door (O), push button (B) and open window (W). We use an expert policy to collect the dataset for all 24 different task combinations. It is worth noting that, as the dataset contains all possible task combinations, the task orders presented in the video demonstrations bring additional difficulty for policy learning. To evaluate the one-shot imitation learning ability, we split the dataset into a training dataset and a test dataset, where the training dataset contains 17 different task combinations and the test dataset contains 7 different task combinations, and there is no overlap of task combinations between the training set and the test set. During testing, we sample 3 different video demonstrations for each task combination, run the evaluation 10 times, and report the average performance.
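+
+ The testing protocol shared by both environments can be summarized by the following sketch (ours; `policy_rollout` is a placeholder that runs the policy on one demonstration video and returns the number of completed tasks):
+
+ ```python
+ # A sketch of the evaluation protocol: 3 demonstration videos per held-out
+ # task combination, 10 rollouts per video, averaged into a single score.
+ import random
+ import statistics
+
+ def evaluate(policy_rollout, test_combos, demo_bank, n_demos=3, n_runs=10, seed=0):
+     rng = random.Random(seed)
+     scores = []
+     for combo in test_combos:
+         for video in rng.sample(demo_bank[combo], n_demos):
+             scores += [policy_rollout(video) for _ in range(n_runs)]
+     return statistics.mean(scores), statistics.pstdev(scores)
+ ```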
188
+
189
+ We include several challenging imitation learning algorithms as our baselines: C-bet (Cui et al., 2022), Decision Transformer (Chen et al., 2021b), and VIMA (Jiang et al., 2023). As C-bet and Decision Transformer were not proposed for video-conditioned policy learning, we modify them to additionally take videos as input, yielding the baselines V-BET and V-DT. VIMA was originally proposed for multimodal prompts; as we do not assume data of other modalities, we train VIMA on our video-only dataset and use it as our baseline VIMA. More details of the experiments can be found in Appendix A.
190
+
191
+ # 5.2 EXPERIMENT: MAIN
192
+
193
+ Table 1: The performance of WL-DM and baselines on all FK and MW tasks.
194
+
195
+ <table><tr><td rowspan="2">Env</td><td rowspan="2">Methods</td><td colspan="7">Tasks</td><td rowspan="2">Avg</td></tr><tr><td>MBTH</td><td>MBLS</td><td>MBTL</td><td>KBTH</td><td>MTLH</td><td>KBLS</td><td>KBTS</td></tr><tr><td rowspan="4">FK</td><td>WL-DM</td><td>2.30</td><td>2.57</td><td>2.37</td><td>2.10</td><td>1.83</td><td>1.97</td><td>3.17</td><td>2.33±0.78</td></tr><tr><td>V-BET</td><td>1.37</td><td>2.47</td><td>0.83</td><td>2.27</td><td>1.73</td><td>1.80</td><td>3.10</td><td>1.94±1.06</td></tr><tr><td>V-DT</td><td>1.00</td><td>2.33</td><td>1.70</td><td>1.33</td><td>1.47</td><td>1.70</td><td>2.10</td><td>1.66±0.79</td></tr><tr><td>VIMA</td><td>0.77</td><td>0.50</td><td>0.50</td><td>1.27</td><td>0.20</td><td>1.67</td><td>1.70</td><td>0.94±0.85</td></tr><tr><td></td><td></td><td>ODWB</td><td>DOBW</td><td>DBWO</td><td>WBOD</td><td>BDOW</td><td>BDWO</td><td>BWDO</td><td></td></tr><tr><td rowspan="4">MW</td><td>WL-DM</td><td>3.33</td><td>2.00</td><td>2.00</td><td>2.00</td><td>2.67</td><td>2.00</td><td>4.00</td><td>2.57±0.90</td></tr><tr><td>V-BET</td><td>1.87</td><td>2.00</td><td>0.73</td><td>1.33</td><td>0.33</td><td>0.00</td><td>1.97</td><td>1.18±0.85</td></tr><tr><td>V-DT</td><td>1.33</td><td>2.13</td><td>1.23</td><td>1.93</td><td>0.37</td><td>0.83</td><td>0.83</td><td>1.24±0.86</td></tr><tr><td>VIMA</td><td>1.80</td><td>1.00</td><td>0.37</td><td>0.37</td><td>1.17</td><td>0.57</td><td>0.83</td><td>0.87±0.84</td></tr></table>
196
+
197
+ As shown in Table 1, our method achieves better average performance in both environments. In Franka Kitchen, our method achieves an improvement of approximately $20.1\%$ compared to the best baseline (V-BET). In Meta World, the improvement is even more significant, with our method achieving a more than $100\%$ improvement compared to the best baseline (V-DT). Although our method consistently outperforms all baselines, we note that there is a large gap in the degree of improvement between the two environments. This is because, in Franka Kitchen, task combinations in the dataset are not diverse enough, and there is no variation in task order (A, B vs. B, A), so task combinations exhibit strong internal correlation. As a result, even without considering the segmentation of tasks in video demonstrations, a policy can exploit this correlation to perform well during testing. In Meta World, however, our dataset includes all task combinations and considers different task orders, making task segmentation in videos far more critical, which explains the gap in improvement between the two environments. More specifically, out of a total of 14 test tasks, our method achieves the best performance in 12 of them. Such overall performance validates the effectiveness of our algorithm.
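+
+ The quoted improvements follow directly from the "Avg" column of Table 1, as the short check below shows (our arithmetic; the MW gain of roughly 107% is what the text summarizes as more than $100\%$, i.e., about $2\times$):
+
+ ```python
+ # Back-of-the-envelope check of the relative improvements in Table 1.
+ fk = {"WL-DM": 2.33, "V-BET": 1.94, "V-DT": 1.66, "VIMA": 0.94}
+ mw = {"WL-DM": 2.57, "V-BET": 1.18, "V-DT": 1.24, "VIMA": 0.87}
+
+ for name, env in [("FK", fk), ("MW", mw)]:
+     best_baseline = max(v for k, v in env.items() if k != "WL-DM")
+     gain = env["WL-DM"] / best_baseline - 1.0
+     print(f"{name}: +{gain:.1%} over the best baseline")
+ # FK: +20.1% over the best baseline (V-BET)
+ # MW: +107.3% over the best baseline (V-DT)
+ ```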
198
+
199
+ For the performance of the baselines, we found that V-BET and V-DT perform at a similar level. This is because the main difference between V-BET and V-DT in our implementation is whether or not actions are used as part of the trajectory. For the robotic environments we use, this action information can often be directly inferred from changes in the state of the robot, so the advantage of using action information is not significant. VIMA, on the other hand, does not perform well in
200
+
201
+ either environment. One potential reason is that VIMA was originally proposed for multimodal scenarios. Although its training process can be transferred to the pure-video scenario, this direct transfer is clearly not effective. Moreover, in terms of one-shot video imitation, VIMA mainly considers object-level variations within a single task rather than variations in task combinations, so we believe its performance decline is understandable.
202
+
203
+ # 5.3 EXPERIMENT: ABLATION
204
+
205
+ Table 2: The performance of WL-DM and ablation baselines on all MW tasks.
206
+
207
+ <table><tr><td rowspan="2">Env</td><td rowspan="2">Methods</td><td colspan="7">Tasks</td><td rowspan="2">Avg</td></tr><tr><td>ODWB</td><td>DOBW</td><td>DBWO</td><td>WBOD</td><td>BDOW</td><td>BDWO</td><td>BWDO</td></tr><tr><td rowspan="4">MW</td><td>WL-DM</td><td>3.33</td><td>2.00</td><td>2.00</td><td>2.00</td><td>2.67</td><td>2.00</td><td>4.00</td><td>2.57±0.90</td></tr><tr><td>WL-DM w/o $\mathcal{L}_{\mathrm{MI}^+}$</td><td>1.00</td><td>2.00</td><td>1.83</td><td>1.67</td><td>1.67</td><td>1.67</td><td>2.00</td><td>1.69±0.46</td></tr><tr><td>WL-DM w/o $\mathcal{L}_{\mathrm{MI}^-}$</td><td>2.00</td><td>2.00</td><td>2.00</td><td>2.00</td><td>1.67</td><td>2.00</td><td>3.33</td><td>2.14±0.64</td></tr><tr><td>WL-DM w $\varnothing$</td><td>2.00</td><td>2.13</td><td>1.93</td><td>2.00</td><td>2.00</td><td>1.33</td><td>2.00</td><td>1.91±0.37</td></tr></table>
208
+
209
+ As our objective function contains two different mutual information terms, we conduct ablation studies in this section to verify the contribution of these two components to our method. We construct two ablation baselines, WL-DM w/o $\mathcal{L}_{\mathrm{MI}^+}$ and WL-DM w/o $\mathcal{L}_{\mathrm{MI}^-}$ . The ablation baselines are identical to our algorithm in all aspects, except that WL-DM w/o $\mathcal{L}_{\mathrm{MI}^+}$ does not use Equation (6) and WL-DM w/o $\mathcal{L}_{\mathrm{MI}^-}$ does not use Equation (5). We validate these two ablation baselines in the Meta World environment and compare them with our method. As shown in Table 2, both ablation baselines achieve worse performance compared to WL-DM, thereby verifying that both Equation (5) and Equation (6) contribute to our algorithm. Notably, even with only Equation (5), the ablation baseline still achieves better performance than the baselines in Section 5.2, which further validates the effectiveness of Theorem 1 in practice.
210
+
211
+ As mentioned in Section 4.3, directly minimizing the mutual information using the prior $g(h_v|s)$ can lead to excessive loss of video information in practice, thereby affecting the performance of the algorithm. To verify this point, we construct another ablation baseline, WL-DM w $\varnothing$. This baseline is again identical to our method in all aspects, except that it does not use $\mathrm{g}_{\tilde{\phi}}^{\mathrm{prior}}(h_v|s,\tilde{\mathrm{v}}_{\mathrm{cur}})$ but instead uses the prior $g(h_v|s)$. From Table 2, we can see that using $g(h_v|s)$ indeed leads to a decline in performance, thus validating the prior video encoder constructed in Section 4.3. Additionally, it is worth noting that WL-DM w/o $\mathcal{L}_{\mathrm{MI}^{-}}$ demonstrates that when we do not minimize mutual information at all, that is, do not control the information provided by the video, the algorithm cannot achieve its best performance. In contrast, WL-DM w $\varnothing$ indicates that excessively reducing the information of the video also leads to a decline in the final performance. This phenomenon further demonstrates the importance of a proper approximation for $\mathrm{v}_{\mathrm{cur}}$ and validates our statements in Section 4.3.
212
+
213
+ # 6 CONCLUSION
214
+
215
+ In this paper, we investigate the problem of one-shot video imitation for video-conditioned policies. To enhance the compositional generalization ability of the learned policy, we propose an imitation learning framework, Watch-Less-Do-More (WL-DM). Our method introduces an information bottleneck-based objective, which leads to implicit skill discovery for video-conditioned policies. The intuition behind this method is that by segmenting the video into different tasks, the policy learns diverse skills corresponding to these tasks. When faced with unseen videos, the policy can also decompose them into combinations of previously encountered tasks, thereby completing these tasks through the learned skills. To better explain our method, we build a theoretical connection between our method and this intuition using information theory. We also present a practical implementation of our algorithm and evaluate it on a variety of tasks across multiple environments. The experimental results indicate that our algorithm outperforms baselines in terms of the compositional generalization ability, which verifies the effectiveness of our algorithm.
216
+
217
+ # 7 LIMITATIONS AND FUTURE WORK
218
+
219
+ A limitation of this work is that our approximations of the current task $\mathrm{k}_{\mathrm{cur}}$ and the current skill $z$ remain naive: both are simply taken from the immediate future. Although similar approximations have been used in many previous studies (Pertsch et al., 2021; Liu et al., 2021; Xie et al., 2023; Yuan et al., 2024), one issue with this approach is the need for video data to be aligned with trajectory data in timesteps (for a state $s_t$, we can access its corresponding video frame $f_t$), which can be costly to collect in many cases.
220
+
221
+ Our future work will primarily focus on extending our algorithm to real-world scenarios, such as physical robots, thereby broadening its application scope. Additionally, we will consider extending our algorithm to multimodal scenarios, utilizing multimodal information to obtain better approximations of tasks and skills, thereby not only enhancing performance but also expanding the range of instruction formats it can process.
222
+
223
+ # ACKNOWLEDGMENTS
224
+
225
+ This work was supported by NSFC under Grants 62450001 and 62476008. The authors would like to thank the anonymous reviewers for their valuable comments and advice.
226
+
227
+ # REFERENCES
228
+
229
+ Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video pretraining (VPT): Learning to act by watching unlabeled online videos. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 24639-24654, 2022.
230
+ Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 1877-1901, 2020.
231
+ Shaofei Cai, Bowei Zhang, Zihao Wang, Xiaojian Ma, Anji Liu, and Yitao Liang. GROOT: Learning to follow instructions by watching gameplay videos. In The Twelfth International Conference on Learning Representations, 2024.
232
+ Thomas Carta, Clément Romac, Thomas Wolf, Sylvain Lamprier, Olivier Sigaud, and Pierre-Yves Oudeyer. Grounding large language models in interactive environments with online reinforcement learning. In International Conference on Machine Learning, pp. 3676-3713. PMLR, 2023.
233
+ Wilka Carvalho, Angelos Filos, Richard L Lewis, Satinder Singh, et al. Composing task knowledge with modular successor feature approximators. arXiv preprint arXiv:2301.12305, 2023.
234
+ Elliot Chane-Sane, Cordelia Schmid, and Ivan Laptev. Learning video-conditioned policies for unseen manipulation tasks. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 909-916. IEEE, 2023.
235
+ Annie S Chen, Suraj Nair, and Chelsea Finn. Learning generalizable robotic reward functions from "in-the-wild" human videos. arXiv preprint arXiv:2103.16817, 2021a.
236
+ Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: reinforcement learning via sequence modeling. In Proceedings of the 35th International Conference on Neural Information Processing Systems, pp. 15084-15097, 2021b.
237
+ Thomas M Cover. Elements of information theory. John Wiley & Sons, 1999.
238
+ Zichen Jeff Cui, Yibin Wang, Nur Muhammad Mahi Shafiullah, and Lerrel Pinto. From play to policy: Conditional behavior generation from uncurated robot data. arXiv preprint arXiv:2210.10047, 2022.
239
+ Sudeep Dasari and Abhinav Gupta. Transformers for one-shot visual imitation. In Conference on Robot Learning, pp. 2071-2084. PMLR, 2021.
240
+
241
+ Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 1087-1098, 2017.
242
+ Alejandro Escontrela, Ademi Adeniji, Wilson Yan, Ajay Jain, Xue Bin Peng, Ken Goldberg, Youngwoon Lee, Danijar Hafner, and Pieter Abbeel. Video prediction models as rewards for reinforcement learning. In Proceedings of the 37th International Conference on Neural Information Processing Systems, pp. 68760-68783, 2023.
243
+ Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine. Robust predictable control. In Proceedings of the 35th International Conference on Neural Information Processing Systems, pp. 27813-27825, 2021.
244
+ Chrisantus Eze and Christopher Crick. Learning by watching: A review of video-based learning approaches for robot manipulation. arXiv preprint arXiv:2402.07127, 2024a.
245
+ Chrisantus Eze and Christopher Crick. Learning by watching: A review of video-based learning approaches for robot manipulation. arXiv preprint arXiv:2402.07127, 2024b.
246
+ Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. In Conference on robot learning, pp. 357-368. PMLR, 2017.
247
+ Anirudh Goyal, Riashat Islam, DJ Strouse, Zafarali Ahmed, Hugo Larochelle, Matthew Botvinick, Yoshua Bengio, and Sergey Levine. Infobot: Transfer and exploration via the information bottleneck. In International Conference on Learning Representations, 2018.
248
+ Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. In *Conference on Robot Learning*, pp. 1025–1037. PMLR, 2020.
249
+ Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1-35, 2017.
250
+ Vidhi Jain, Maria Attarian, Nikhil J Joshi, Ayzaan Wahid, Danny Driess, Quan Vuong, Pannag R Sanketi, Pierre Sermanet, Stefan Welker, Christine Chan, et al. Vid2robot: End-to-end video-conditioned policy learning with cross-attention transformers. arXiv preprint arXiv:2403.12943, 2024.
251
+ Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, and Chelsea Finn. Bc-z: Zero-shot task generalization with robotic imitation learning. In *Conference on Robot Learning*, pp. 991–1002. PMLR, 2022.
252
+ Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, and Linxi Fan. Vima: robot manipulation with multimodal prompts. In Proceedings of the 40th International Conference on Machine Learning, pp. 14975-15022, 2023.
253
+ Seungjae Lee, Yibin Wang, Haritheja Etukuru, H Jin Kim, Nur Muhammad Mahi Shafiullah, and Lerrel Pinto. Behavior generation with latent actions. arXiv preprint arXiv:2403.03181, 2024.
254
+ Baihan Lin, Djallel Bouneffouf, and Irina Rish. A survey on compositional generalization in applications. arXiv preprint arXiv:2302.01067, 2023.
255
+ Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, and Anima Anandkumar. Coach-player multi-agent reinforcement learning for dynamic team composition. In International Conference on Machine Learning, pp. 6860-6870. PMLR, 2021.
256
+ Minghuan Liu, Menghui Zhu, and Weinan Zhang. Goal-conditioned reinforcement learning: Problems and solutions. arXiv preprint arXiv:2201.08299, 2022.
257
+ Lajanugen Logeswaran, Wilka Carvalho, and Honglak Lee. Learning compositional tasks from language instructions. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 13300-13308, 2023.
258
+
259
+ Zhao Mandi, Fangchen Liu, Kimin Lee, and Pieter Abbeel. Towards more generalizable one-shot visual imitation learning. In 2022 International Conference on Robotics and Automation (ICRA), pp. 2434-2444. IEEE, 2022.
260
+ Robert McCarthy, Daniel CH Tan, Dominik Schmidt, Fernando Acero, Nathan Herr, Yilun Du, Thomas G Thuruthel, and Zhibin Li. Towards generalist robot learning from internet video: A survey. arXiv preprint arXiv:2404.19664, 2024.
261
+ Suraj Nair, Eric Mitchell, Kevin Chen, Silvio Savarese, Chelsea Finn, et al. Learning language-conditioned robot behavior from offline data and crowd-sourced annotation. In Conference on Robot Learning, pp. 1303-1315. PMLR, 2022.
262
+ Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. R3m: A universal visual representation for robot manipulation. In Conference on Robot Learning, pp. 892-909. PMLR, 2023.
263
+ Soroush Nasiriany, Vitchyr H Pong, Steven Lin, and Sergey Levine. Planning with goal-conditioned policies. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 14843-14854, 2019.
264
+ Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-shot task generalization with multi-task deep reinforcement learning. In International Conference on Machine Learning, pp. 2661-2670. PMLR, 2017.
265
+ Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 27730-27744, 2022.
266
+ Karl Pertsch, Youngwoon Lee, and Joseph Lim. Accelerating reinforcement learning with learned skill priors. In Conference on robot learning, pp. 188-204. PMLR, 2021.
267
+ Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 1134–1141. IEEE, 2018.
268
+ Sangwoo Shin, Daehee Lee, Minjong Yoo, Woo Kyung Kim, and Honguk Woo. One-shot imitation in a non-stationary environment via multi-modal skill. In Proceedings of the 40th International Conference on Machine Learning, pp. 31562-31578, 2023.
269
+ Sangwoo Shin, Minjong Yoo, Jeongwoo Lee, and Honguk Woo. Semtra: A semantic skill translator for cross-domain zero-shot policy adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 15000-15008, 2024.
270
+ Sam Spilsbury and Alexander Ilin. Compositional generalization in grounded language learning via induced model sparsity. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pp. 143-155, 2022.
271
+ Elias Stengel-Eskin, Andrew Hundt, Zhuohong He, Aditya Murali, Nakul Gopalan, Matthew Gombolay, and Gregory Hager. Guiding multi-step rearrangement tasks with natural language instructions. In Conference on Robot Learning, pp. 1486-1501. PMLR, 2022.
272
+ Richard S Sutton. Reinforcement learning: An introduction. A Bradford Book, 2018.
273
+ Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In 2015 IEEE information theory workshop (itw), pp. 1-5. IEEE, 2015.
274
+ Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
275
+
276
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
277
+ Chen Wang, Linxi Fan, Jiankai Sun, Ruohan Zhang, Li Fei-Fei, Danfei Xu, Yuke Zhu, and Anima Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. In Conference on Robot Learning, pp. 201-221. PMLR, 2023.
278
+ Philipp Wu, Kourosh Hakhamaneshi, Yuqing Du, Igor Mordatch, Aravind Rajeswaran, and Pieter Abbeel. Semi-supervised one-shot imitation learning. arXiv preprint arXiv:2408.05285, 2024.
279
+ Zhihui Xie, Zichuan Lin, Deheng Ye, Qiang Fu, Yang Wei, and Shuai Li. Future-conditioned unsupervised pretraining for decision transformer. In International Conference on Machine Learning, pp. 38187-38203. PMLR, 2023.
280
+ Mengda Xu, Zhenjia Xu, Cheng Chi, Manuela Veloso, and Shuran Song. Xskill: Cross embodiment skill discovery. In Conference on Robot Learning, pp. 3536-3555. PMLR, 2023.
281
+ Karmesh Yadav, Arjun Majumdar, Ram Ramrakhya, Naoki Yokoyama, Alexei Baevski, Zsolt Kira, Oleksandr Maksymets, and Dhruv Batra. OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav. arXiv preprint arXiv:2303.07798, 2023a.
282
+ Karmesh Yadav, Ram Ramrakhya, Arjun Majumdar, Vincent-Pierre Berges, Sachit Kuhar, Dhruv Batra, Alexei Baevski, and Oleksandr Maksymets. Offline visual representation learning for embodied navigation. In Workshop on Reincarnating Reinforcement Learning at ICLR 2023, 2023b.
283
+ Xue Ying. An overview of overfitting and its solutions. In Journal of physics: Conference series, volume 1168, pp. 022022. IOP Publishing, 2019.
284
+ Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on robot learning, pp. 1094-1100. PMLR, 2020.
285
+ Haoqi Yuan, Zhancun Mu, Feiyang Xie, and Zongqing Lu. Pre-training goal-based models for sample-efficient reinforcement learning. In The Twelfth International Conference on Learning Representations, 2024.
286
+ Bohan Zhou, Jiangxing Wang, and Zongqing Lu. Nolo: Navigate only look once. arXiv preprint arXiv:2408.01384, 2024.
287
+
288
+ # A IMPLEMENTATION DETAILS
289
+
290
+ We implement our algorithm and all baselines based on the codebase of C-bet (Cui et al., 2022). For WL-DM and V-BET, we consider only observations in the trajectory, and for V-DT and VIMA, we consider both observations and actions in the trajectory, which aligns with the implementations stated in the papers of C-bet (Cui et al., 2022), DT (Chen et al., 2021b), and VIMA (Jiang et al., 2023). For WL-DM, V-BET, and V-DT, we use the same transformer model as stated in C-bet, which contains multiple self-attention layers to process video information and trajectory information at the same time. For VIMA, we use alternating cross-attention and self-attention layers as described in its paper (Jiang et al., 2023).
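+
+ For readers unfamiliar with the C-bet architecture, the following minimal sketch (ours; dimensions are illustrative and the GPT-like causal masking is omitted for brevity) shows the shared idea: video tokens and trajectory tokens are concatenated and processed jointly by a stack of self-attention layers.
+
+ ```python
+ # A minimal sketch of the joint self-attention encoder described above.
+ import torch
+ import torch.nn as nn
+
+ class JointSelfAttentionEncoder(nn.Module):
+     def __init__(self, token_dim=60, n_layers=3, n_heads=3):
+         super().__init__()
+         layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=n_heads,
+                                            batch_first=True)
+         self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
+
+     def forward(self, video_tokens, traj_tokens):
+         # (B, T_v, D) video tokens and (B, T_s, D) trajectory tokens share
+         # one token stream; h_v is read off the contextualized outputs.
+         tokens = torch.cat([video_tokens, traj_tokens], dim=1)
+         return self.blocks(tokens)
+ ```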
291
+
292
+ For all experiments, we set the learning rate to $3 \times 10^{-4}$ and the window size for the trajectory to 20 (for V-DT and VIMA, this means 20 observation-action pairs). For WL-DM, the window size of future video segments is sampled from $[20, 40]$. As we use the codebase of C-bet, all methods use the same action decoder, where we set the number of bins for action discretization to 32; the id of each cluster is also used as the skill representation for WL-DM. For the Franka Kitchen environment (Gupta et al., 2020), we use decoders with 3 layers and 3 heads and set the hidden dimension to 60 (for VIMA, this means 3 self-attention layers and 3 cross-attention layers in total). We train all methods for 10 epochs. For WL-DM, $\alpha_1$ is fixed to $1 \times 10^{-2}$ and $\alpha_2$ is fixed to $1 \times 10^{-1}$ during the training process. For the Meta World environment (Yu et al., 2020), we use decoders with 6 layers and 6 heads and set the hidden dimension to 120 (for VIMA, this means 6 self-attention layers and 6 cross-attention layers in total). We train all methods for 30 epochs. For WL-DM, $\alpha_1$ is set to 0 in the beginning and fixed to $1 \times 10^{-3}$ after 10 epochs, and $\alpha_2$ is fixed to 10 during the training process.
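+
+ For convenience, the hyperparameters stated above are collected below (a plain summary of the reported settings, not an official config file):
+
+ ```python
+ # Reported hyperparameters, gathered into one reference dict.
+ CONFIG = {
+     "common": {
+         "learning_rate": 3e-4,
+         "trajectory_window": 20,          # observation(-action) pairs
+         "action_bins": 32,                # doubles as skill cluster ids
+         "future_video_window": (20, 40),  # WL-DM only, sampled per step
+     },
+     "franka_kitchen": {
+         "layers": 3, "heads": 3, "hidden_dim": 60,
+         "epochs": 10, "alpha1": 1e-2, "alpha2": 1e-1,
+     },
+     "meta_world": {
+         "layers": 6, "heads": 6, "hidden_dim": 120,
+         "epochs": 30,
+         "alpha1": 1e-3,   # 0 for the first 10 epochs, then fixed
+         "alpha2": 10.0,
+     },
+ }
+ ```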
293
+
294
+ # B PROOF OF THEOREM 1
295
+
296
+ Theorem 1. If we have $\mathrm{MI}(h_v;\mathrm{v}_{\mathrm{other}}|s,\mathrm{v}_{\mathrm{cur}}) = 0$ , then $D_{\mathrm{KL}}\big(\pi (a|s,v)||\pi (a|s,\mathrm{v}_{\mathrm{cur}})\big) = 0$ for all state-video pairs $(s,v)\in S\times \mathcal{V}$ with non-zero probability $P(s,v) > 0$ .
297
+
298
+ Proof. By expanding the mutual information $\mathrm{MI}(\mathrm{v}_{\mathrm{other}};h_v,a|s,\mathrm{v}_{\mathrm{cur}})$ , we can have the following equality:
299
+
300
+ $$
+ \begin{aligned}
+ \mathrm{MI}\left(\mathrm{v}_{\mathrm{other}}; h_v, a \mid s, \mathrm{v}_{\mathrm{cur}}\right)
+ &= \mathbb{E}_{P(s, \mathrm{v}_{\mathrm{cur}})} \mathbb{E}_{P(\mathrm{v}_{\mathrm{other}}, h_v, a \mid s, \mathrm{v}_{\mathrm{cur}})}\left[\log \frac{P(\mathrm{v}_{\mathrm{other}}, h_v, a \mid s, \mathrm{v}_{\mathrm{cur}})}{P(\mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}})\, P(h_v, a \mid s, \mathrm{v}_{\mathrm{cur}})}\right] \\
+ &= \mathbb{E}_{P(s, \mathrm{v}_{\mathrm{cur}})} \mathbb{E}_{P(\mathrm{v}_{\mathrm{other}}, h_v, a \mid s, \mathrm{v}_{\mathrm{cur}})}\left[\log P(h_v, a \mid s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}}) - \log P(h_v, a \mid s, \mathrm{v}_{\mathrm{cur}})\right] \\
+ &= \mathbb{E}_{P(s, \mathrm{v}_{\mathrm{cur}})} \mathbb{E}_{P(\mathrm{v}_{\mathrm{other}}, h_v, a \mid s, \mathrm{v}_{\mathrm{cur}})}\left[\begin{array}{l} \log P(h_v \mid s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}}) + \log P(a \mid h_v, s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}}) \\ -\log P(h_v \mid s, \mathrm{v}_{\mathrm{cur}}) - \log P(a \mid h_v, s, \mathrm{v}_{\mathrm{cur}}) \end{array}\right] \\
+ &= \mathbb{E}_{P(s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}})}\left[D_{\mathrm{KL}}\left(P(h_v \mid s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}}) \,\|\, P(h_v \mid s, \mathrm{v}_{\mathrm{cur}})\right)\right] \\
+ &\quad + \mathbb{E}_{P(h_v, s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}})}\left[D_{\mathrm{KL}}\left(P(a \mid h_v, s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}}) \,\|\, P(a \mid h_v, s, \mathrm{v}_{\mathrm{cur}})\right)\right] \\
+ &= \mathrm{MI}\left(h_v; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}\right) + \mathrm{MI}\left(a; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}, h_v\right).
+ \end{aligned}
+ $$
303
+
304
+ Similarly, we can also have:
305
+
306
+ $$
+ \mathrm{MI}(\mathrm{v}_{\mathrm{other}}; h_v, a \mid s, \mathrm{v}_{\mathrm{cur}}) = \mathrm{MI}(a; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}) + \mathrm{MI}(h_v; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}, a).
+ $$
309
+
310
+ Combining these two equalities, we have:
311
+
312
+ $$
+ \mathrm{MI}\left(h_v; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}\right) + \mathrm{MI}\left(a; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}, h_v\right) = \mathrm{MI}\left(a; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}\right) + \mathrm{MI}\left(h_v; \mathrm{v}_{\mathrm{other}} \mid s, \mathrm{v}_{\mathrm{cur}}, a\right).
+ $$
315
+
316
+ Since $a$ and $\mathrm{v}_{\mathrm{other}}$ are conditionally independent given $h_v$ , we have $\mathrm{MI}(a; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}, h_v) = 0$ . Since mutual information is non-negative, we also have $\mathrm{MI}(h_v; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}, a) \geq 0$ , which yields the following inequality, a conditional version of the data processing inequality (Cover, 1999):
319
+
320
+ $$
321
+ \mathrm{MI}\left(h_v; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}\right) \geq \mathrm{MI}\left(a; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}\right).
322
+ $$
323
+
324
+ Since $\mathrm{MI}(a;\mathrm{v}_{\mathrm{other}}|s,\mathrm{v}_{\mathrm{cur}})\geq 0$ , if $\mathrm{MI}(h_v;\mathrm{v}_{\mathrm{other}}|s,\mathrm{v}_{\mathrm{cur}}) = 0$ as assumed in the theorem, then we can conclude that:
325
+
326
+ $$
327
+ \mathrm{MI}\left(a; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}\right) = 0.
328
+ $$
329
+
330
+ By expanding this mutual information term and writing $v = (\mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}})$ , we have:
331
+
332
+ $$
333
+ \begin{array}{l} \mathrm{MI}(a; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}) \\ = \mathbb{E}_{P(s,\mathrm{v}_{\mathrm{cur}},\mathrm{v}_{\mathrm{other}})}\left[D_{\mathrm{KL}}\left(\pi(a | s, \mathrm{v}_{\mathrm{cur}}, \mathrm{v}_{\mathrm{other}}) || \pi(a | s, \mathrm{v}_{\mathrm{cur}})\right)\right] \\ = \mathbb{E}_{P(s,v)}\left[D_{\mathrm{KL}}\left(\pi(a | s, v) || \pi(a | s, \mathrm{v}_{\mathrm{cur}})\right)\right] \\ = 0. \end{array}
334
+ $$
335
+
336
+ Since the KL divergence is non-negative, the above expectation can be zero only if $D_{\mathrm{KL}}(\pi (a|s,v)||\pi (a|s,\mathrm{v_{cur}})) = 0$ for all state-video pairs $(s,v)\in S\times \mathcal{V}$ with non-zero probability $P(s,v) > 0$ , which concludes the proof.
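+
+ As a sanity check, the chain-rule decomposition used above can be verified numerically. The toy script below (our own illustration, not part of the paper) builds a random joint distribution over small discrete alphabets, whose sizes are arbitrary, and checks that $\mathrm{MI}(\mathrm{v}_{\mathrm{other}}; h_v, a | s, \mathrm{v}_{\mathrm{cur}}) = \mathrm{MI}(h_v; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}) + \mathrm{MI}(a; \mathrm{v}_{\mathrm{other}} | s, \mathrm{v}_{\mathrm{cur}}, h_v)$ :
+
+ ```python
+ import itertools
+ from collections import defaultdict
+
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ sizes = {"s": 2, "vcur": 2, "vother": 3, "h": 3, "a": 2}  # arbitrary alphabet sizes
+ names = list(sizes)
+ P = rng.random(tuple(sizes.values()))
+ P /= P.sum()  # joint P(s, v_cur, v_other, h_v, a)
+
+ def marginal(keep):
+     """Marginal over the named variables, keyed by their values in `keep` order."""
+     table = defaultdict(float)
+     for idx in itertools.product(*(range(sizes[n]) for n in names)):
+         table[tuple(idx[names.index(n)] for n in keep)] += P[idx]
+     return table
+
+ def cond_mi(x, y, z):
+     """MI(X; Y | Z) = sum p(x,y,z) log[p(x,y,z) p(z) / (p(x,z) p(y,z))]."""
+     pxyz, pxz, pyz, pz = marginal(x + y + z), marginal(x + z), marginal(y + z), marginal(z)
+     mi = 0.0
+     for key, p in pxyz.items():
+         kx, ky, kz = key[:len(x)], key[len(x):len(x) + len(y)], key[len(x) + len(y):]
+         if p > 0:
+             mi += p * np.log(p * pz[kz] / (pxz[kx + kz] * pyz[ky + kz]))
+     return mi
+
+ lhs = cond_mi(("vother",), ("h", "a"), ("s", "vcur"))
+ rhs = cond_mi(("h",), ("vother",), ("s", "vcur")) + cond_mi(("a",), ("vother",), ("s", "vcur", "h"))
+ assert np.isclose(lhs, rhs)  # the decomposition used in the proof holds
+ ```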
337
+
338
+ # C VISUALIZATION
339
+
340
+ ![](images/1e618b1bb62118812c5ee380512f6925b8154540f0896ff1b7e276091b790146.jpg)
341
+ Figure 3: Visualization of $h_v$ over timesteps.
342
+
343
+ In this section, we present visualization results for our method, showing how $h_v$ of WL-DM changes over timesteps. As shown in Figure 3, $h_v$ of WL-DM tends to converge at adjacent timesteps. Note that since we use a GPT-like transformer architecture as the encoder, the information of video tokens and observation tokens is mixed together in $h_v$ . Furthermore, as we do not introduce any task-level information (such as task-level video segmentation annotations), the clustering of $h_v$ does not fully correspond to task boundaries.
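+
+ For reference, the sketch below (our own illustration, not the paper's code) shows one way to produce such a figure: project the per-timestep $h_v$ vectors to 2D with t-SNE and color each point by its timestep; a synthetic random walk stands in for the real $h_v$ embeddings.
+
+ ```python
+ # Project per-timestep h_v embeddings to 2D and color by timestep (Figure 3 style).
+ import numpy as np
+ import matplotlib.pyplot as plt
+ from sklearn.manifold import TSNE
+
+ T, d = 200, 64
+ h_v = np.cumsum(np.random.default_rng(0).normal(size=(T, d)), axis=0)  # stand-in for real h_v
+
+ xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(h_v)
+ sc = plt.scatter(xy[:, 0], xy[:, 1], c=np.arange(T), cmap="viridis", s=10)
+ plt.colorbar(sc, label="timestep")
+ plt.title("h_v over timesteps")
+ plt.savefig("h_v_tsne.png", dpi=150)
+ ```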
344
+
345
+ # D ADDITIONAL EXPERIMENTS
346
+
347
+ # D.1 ABLATION: TYPES OF TASK COMBINATIONS
348
+
349
+ We further study the effect of the number of task combinations in the training set in the Meta World environment. In Section 5.2, the training set contained 17 task combinations (7/3 split); here we additionally consider training sets with 15 task combinations (6/4 split) and 20 task combinations (8/2 split). As shown in Table 3, WL-DM still outperforms the other baselines, further demonstrating the effectiveness of our method.
350
+
351
+ Table 3: The performance of all methods with different numbers of task combinations in the training set on MW tasks.
+
+ <table><tr><td>Split</td><td>WL-DM</td><td>V-BET</td><td>V-DT</td><td>VIMA</td></tr><tr><td>6/4</td><td>1.93±0.26</td><td>1.34±0.78</td><td>0.86±0.84</td><td>0.43±0.78</td></tr><tr><td>7/3 (main exp)</td><td>2.57±0.90</td><td>1.18±0.85</td><td>1.24±0.86</td><td>0.87±0.84</td></tr><tr><td>8/2</td><td>1.88±0.36</td><td>0.63±0.86</td><td>0.94±0.88</td><td>0.81±0.61</td></tr></table>
352
+
353
+ # D.2 ABLATION: NUMBER OF VIDEOS FOR EACH TASK COMBINATION
354
+
355
+ We also study the effect of the number of videos per task combination in the Meta World environment. In Section 5.2 we used 20 different videos for each task combination; here we additionally consider 40 different videos per task combination. As shown in Table 4, WL-DM still outperforms the other baselines, which again demonstrates the effectiveness of WL-DM.
356
+
357
+ Table 4: The performance of all methods with different numbers of videos for each task combination on MW tasks.
358
+
359
+ <table><tr><td>Videos per combination</td><td>WL-DM</td><td>V-BET</td><td>V-DT</td><td>VIMA</td></tr><tr><td>20 (main exp)</td><td>2.57±0.90</td><td>1.18±0.85</td><td>1.24±0.86</td><td>0.87±0.84</td></tr><tr><td>40</td><td>2.21±0.81</td><td>1.71±0.63</td><td>1.33±0.90</td><td>1.09±0.63</td></tr></table>
360
+
361
+ # D.3 MORE BASELINE: VIP
362
+
363
364
+
365
+ <table><tr><td rowspan="2">Env</td><td rowspan="2">Methods</td><td colspan="7">Tasks</td><td rowspan="2">Avg</td></tr><tr><td>ODWB</td><td>DOBW</td><td>DBWO</td><td>WBOD</td><td>BDOW</td><td>BDWO</td><td>BWDO</td></tr><tr><td rowspan="5">MW</td><td>WL-DM</td><td>3.33</td><td>2.00</td><td>2.00</td><td>2.00</td><td>2.67</td><td>2.00</td><td>4.00</td><td>2.57±0.90</td></tr><tr><td>V-BET</td><td>1.87</td><td>2.00</td><td>0.73</td><td>1.33</td><td>0.33</td><td>0.00</td><td>1.97</td><td>1.18±0.85</td></tr><tr><td>V-DT</td><td>1.33</td><td>2.13</td><td>1.23</td><td>1.93</td><td>0.37</td><td>0.83</td><td>0.83</td><td>1.24±0.86</td></tr><tr><td>VIMA</td><td>1.80</td><td>1.00</td><td>0.37</td><td>0.37</td><td>1.17</td><td>0.57</td><td>0.83</td><td>0.87±0.84</td></tr><tr><td>ViP</td><td>2.80</td><td>2.20</td><td>2.00</td><td>1.87</td><td>1.63</td><td>1.10</td><td>1.80</td><td>1.91±0.88</td></tr></table>
366
+
367
+ Table 5: The performance of all methods including ViP on all MW tasks.
368
+
369
+ We have added a new video-conditioned baseline: ViP (Chane-Sane et al., 2023). Although ViP also learns a video-conditioned policy, its setting differs from ours, so we made the following modifications to adapt it to our setting:
370
+
371
+ - Since we do not consider human videos as input, we have removed the part that uses human videos for pre-training.
372
+
373
+ - Since we do not assume access to video labels, we have changed its supervised contrastive learning part to unsupervised contrastive learning on robot videos.
374
+
375
+ We implemented ViP with minimal modifications on the same codebase as WL-DM and conducted experiments in the Meta World environment. The results are shown in Table 5. ViP outperforms the other baselines in this environment, demonstrating its effectiveness as a video-conditioned policy; however, it still lags behind WL-DM, which further demonstrates the effectiveness of WL-DM.
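+
+ For concreteness, the sketch below illustrates the unsupervised contrastive objective we substituted for ViP's supervised contrastive loss: a standard symmetric InfoNCE over two augmented views of the same robot video clip. Names and shapes here are our own assumptions, not the official implementation.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
+     """z1, z2: (B, d) embeddings of two augmented views of the same robot video clip.
+     Matching rows are positives; all other rows in the batch act as negatives."""
+     z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
+     logits = z1 @ z2.t() / temperature                   # (B, B) cosine similarities
+     labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
+     # Symmetric InfoNCE: each view predicts its counterpart.
+     return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
+ ```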
376
+
377
+ # D.4 MORE DATASET
378
+
379
+ To further evaluate our method, we construct a script, following Lee et al. (2024), that converts the state-based observations of the original Franka Kitchen dataset into image-based observations, and we train all methods on this dataset. For WL-DM, we use a linear schedule for $\alpha_{1}$ , with coef_start set to 0 and coef_end set to $1 \times 10^{-4}$ ; $\alpha_{2}$ is fixed to $1 \times 10^{-1}$ throughout training (see the sketch below). As shown in Table 6, WL-DM still outperforms the other methods, which further demonstrates the effectiveness of our approach.
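+
+ A minimal sketch of this schedule, assuming $\alpha_1$ is interpolated per training step (the exact granularity is an implementation detail we do not specify here):
+
+ ```python
+ def alpha1_schedule(step: int, total_steps: int,
+                     coef_start: float = 0.0, coef_end: float = 1e-4) -> float:
+     """Linearly interpolate alpha_1 from coef_start to coef_end over training."""
+     frac = min(step / max(total_steps, 1), 1.0)
+     return coef_start + frac * (coef_end - coef_start)
+
+ ALPHA2 = 1e-1  # alpha_2 is kept fixed throughout training
+ ```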
380
+
381
+ Table 6: The performance of all methods on the new FK dataset.
+
+ <table><tr><td rowspan="2">Env</td><td rowspan="2">Methods</td><td colspan="7">Tasks</td><td rowspan="2">Avg</td></tr><tr><td>BTLS</td><td>BTSH</td><td>MBTS</td><td>MBTH</td><td>MLSH</td><td>MBTL</td><td>MKBH</td></tr><tr><td rowspan="4">FK(new)</td><td>WL-DM</td><td>2.43</td><td>1.63</td><td>2.63</td><td>2.57</td><td>2.27</td><td>2.27</td><td>2.5</td><td>2.33±0.74</td></tr><tr><td>V-BET</td><td>1.23</td><td>1.43</td><td>2.33</td><td>1.53</td><td>2.10</td><td>1.70</td><td>2.17</td><td>1.79±0.80</td></tr><tr><td>V-DT</td><td>1.63</td><td>2.20</td><td>1.40</td><td>1.80</td><td>1.47</td><td>1.73</td><td>2.20</td><td>1.78±0.83</td></tr><tr><td>VIMA</td><td>0.87</td><td>1.27</td><td>2.20</td><td>1.80</td><td>1.80</td><td>1.43</td><td>2.23</td><td>1.66±0.80</td></tr></table>
382
+
383
+ # D.5 MORE BASE ALGORITHM
384
+
385
+ WL-DM can be seen as V-BET augmented with an information bottleneck-based loss. To further validate this loss, we also applied it to V-DT and VIMA and conducted experiments in the Meta World environment. As shown in Table 7, WL-DM+V-DT and WL-DM+VIMA both outperform their respective base algorithms, which further validates the effectiveness of the proposed information bottleneck-based loss.
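+
+ Schematically, applying the loss to a new base algorithm only changes the training objective. The sketch below uses placeholder names for the two regularizers; their concrete estimators are the ones defined in the main text:
+
+ ```python
+ def total_loss(base_loss, ib_term_1, ib_term_2, alpha1, alpha2):
+     """base_loss: the base algorithm's objective (e.g., V-DT or VIMA imitation loss);
+     ib_term_1 / ib_term_2: the two information bottleneck regularizers (placeholders),
+     weighted by the coefficients alpha_1 and alpha_2 tuned in Appendix D.6."""
+     return base_loss + alpha1 * ib_term_1 + alpha2 * ib_term_2
+ ```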
386
+
387
+ Table 7: The performance on MW tasks when the information bottleneck-based loss of WL-DM is applied on top of different base algorithms.
388
+
389
+ <table><tr><td></td><td>WL-DM</td><td>V-BET</td><td>WL-DM+V-DT</td><td>V-DT</td><td>WL-DM+VIMA</td><td>VIMA</td></tr><tr><td>MW</td><td>2.57±0.90</td><td>1.18±0.85</td><td>1.72±0.68</td><td>1.24±0.86</td><td>1.84±0.84</td><td>0.87±0.84</td></tr></table>
390
+
391
+ # D.6 COEFFICIENT SELECTION
392
+
393
394
+
395
+ <table><tr><td>α1 \ α2</td><td>0.1</td><td>1</td><td>10</td></tr><tr><td>0.1</td><td>0.95±0.77</td><td>2.09±0.69</td><td>2.16±0.71</td></tr><tr><td>0.01</td><td>0.82±0.86</td><td>1.83±0.53</td><td>2.05±0.62</td></tr><tr><td>0.001</td><td>1.03±0.80</td><td>1.79±0.41</td><td>2.57±0.90</td></tr></table>
397
+
398
+ Table 8: The performance of WL-DM with different coefficients on MW tasks.
399
+
400
+ The coefficients of the mutual information loss should be tuned for each environment; we select them via a grid search. Taking the Meta World environment as an example, the performance under different coefficient settings is shown in Table 8.
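+
+ The search itself is straightforward; the sketch below enumerates the grid from Table 8, with `evaluate` as a placeholder that stands in for training WL-DM and measuring its average MW score (here it simply returns the Table 8 numbers):
+
+ ```python
+ import itertools
+
+ def evaluate(alpha1: float, alpha2: float) -> float:
+     """Placeholder: train WL-DM with these coefficients and return the average
+     score over MW tasks; the values below mirror Table 8."""
+     table8 = {(0.1, 0.1): 0.95, (0.1, 1.0): 2.09, (0.1, 10.0): 2.16,
+               (0.01, 0.1): 0.82, (0.01, 1.0): 1.83, (0.01, 10.0): 2.05,
+               (0.001, 0.1): 1.03, (0.001, 1.0): 1.79, (0.001, 10.0): 2.57}
+     return table8[(alpha1, alpha2)]
+
+ grid = itertools.product([0.1, 0.01, 0.001], [0.1, 1.0, 10.0])
+ best = max(grid, key=lambda c: evaluate(*c))
+ print(best)  # -> (0.001, 10.0), the setting used in the main experiments
+ ```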
2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38df953ef6200c5ac9c25ecaa386b6b9e38156b020648540665588497cb511c3
3
+ size 497682
2025/Watch Less, Do More_ Implicit Skill Discovery for Video-Conditioned Policy/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Watermark Anything With Localized Messages/92c7c1c6-3f70-47a0-9a49-ed1f73de4119_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Watermark Anything With Localized Messages/92c7c1c6-3f70-47a0-9a49-ed1f73de4119_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Watermark Anything With Localized Messages/92c7c1c6-3f70-47a0-9a49-ed1f73de4119_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f3d8b6d7cedee6d95b38156456879774aa0aa01d0a6c26f61719e6d100e2d932
3
+ size 27188006
2025/Watermark Anything With Localized Messages/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/Watermark Anything With Localized Messages/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c71d2cb868c9237a392cbdd5608cd6c7744c9efac22afbf300bfc210e9c36387
3
+ size 2045557
2025/Watermark Anything With Localized Messages/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/WavTokenizer_ an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling/1a814df9-7a3f-46c8-af0e-4fdc8338bc15_content_list.json ADDED
The diff for this file is too large to render. See raw diff