Chelsea707 committed on
Commit
d8cbe90
·
verified ·
1 Parent(s): 26807e7

Add Batch cb998d3e-d98f-4cea-add7-67819c4c9a06 data

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. .gitattributes +64 -0
  2. 2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/a43d6fa2-3ccb-44f7-9c93-8e35d79395ec_content_list.json +0 -0
  3. 2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/a43d6fa2-3ccb-44f7-9c93-8e35d79395ec_model.json +0 -0
  4. 2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/a43d6fa2-3ccb-44f7-9c93-8e35d79395ec_origin.pdf +3 -0
  5. 2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/full.md +560 -0
  6. 2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/images.zip +3 -0
  7. 2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/layout.json +0 -0
  8. 2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/4be298ba-d505-43be-9230-ca3e8692d35e_content_list.json +0 -0
  9. 2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/4be298ba-d505-43be-9230-ca3e8692d35e_model.json +0 -0
  10. 2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/4be298ba-d505-43be-9230-ca3e8692d35e_origin.pdf +3 -0
  11. 2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/full.md +0 -0
  12. 2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/images.zip +3 -0
  13. 2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/layout.json +0 -0
  14. 2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/1210ff35-9465-44cf-b6f5-91533149c936_content_list.json +0 -0
  15. 2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/1210ff35-9465-44cf-b6f5-91533149c936_model.json +0 -0
  16. 2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/1210ff35-9465-44cf-b6f5-91533149c936_origin.pdf +3 -0
  17. 2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/full.md +0 -0
  18. 2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/images.zip +3 -0
  19. 2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/layout.json +0 -0
  20. 2025/Unlocking Point Processes through Point Set Diffusion/e3770118-f325-4609-a8ac-be9346630a7e_content_list.json +0 -0
  21. 2025/Unlocking Point Processes through Point Set Diffusion/e3770118-f325-4609-a8ac-be9346630a7e_model.json +0 -0
  22. 2025/Unlocking Point Processes through Point Set Diffusion/e3770118-f325-4609-a8ac-be9346630a7e_origin.pdf +3 -0
  23. 2025/Unlocking Point Processes through Point Set Diffusion/full.md +595 -0
  24. 2025/Unlocking Point Processes through Point Set Diffusion/images.zip +3 -0
  25. 2025/Unlocking Point Processes through Point Set Diffusion/layout.json +0 -0
  26. 2025/Unlocking the Potential of Model Calibration in Federated Learning/430190dc-b0a8-4e8b-858f-8d860a87a1e9_content_list.json +0 -0
  27. 2025/Unlocking the Potential of Model Calibration in Federated Learning/430190dc-b0a8-4e8b-858f-8d860a87a1e9_model.json +0 -0
  28. 2025/Unlocking the Potential of Model Calibration in Federated Learning/430190dc-b0a8-4e8b-858f-8d860a87a1e9_origin.pdf +3 -0
  29. 2025/Unlocking the Potential of Model Calibration in Federated Learning/full.md +0 -0
  30. 2025/Unlocking the Potential of Model Calibration in Federated Learning/images.zip +3 -0
  31. 2025/Unlocking the Potential of Model Calibration in Federated Learning/layout.json +0 -0
  32. 2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/dc91f6c8-22c0-4dce-99c6-519db6833887_content_list.json +0 -0
  33. 2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/dc91f6c8-22c0-4dce-99c6-519db6833887_model.json +0 -0
  34. 2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/dc91f6c8-22c0-4dce-99c6-519db6833887_origin.pdf +3 -0
  35. 2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/full.md +438 -0
  36. 2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/images.zip +3 -0
  37. 2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/layout.json +0 -0
  38. 2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/c2d25ae0-a049-4a50-b7c8-27c324a76f6b_content_list.json +0 -0
  39. 2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/c2d25ae0-a049-4a50-b7c8-27c324a76f6b_model.json +0 -0
  40. 2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/c2d25ae0-a049-4a50-b7c8-27c324a76f6b_origin.pdf +3 -0
  41. 2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/full.md +477 -0
  42. 2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/images.zip +3 -0
  43. 2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/layout.json +0 -0
  44. 2025/Unsupervised Meta-Learning via In-Context Learning/8959c2e4-7cb5-4574-88e7-fc386607e7ca_content_list.json +0 -0
  45. 2025/Unsupervised Meta-Learning via In-Context Learning/8959c2e4-7cb5-4574-88e7-fc386607e7ca_model.json +0 -0
  46. 2025/Unsupervised Meta-Learning via In-Context Learning/8959c2e4-7cb5-4574-88e7-fc386607e7ca_origin.pdf +3 -0
  47. 2025/Unsupervised Meta-Learning via In-Context Learning/full.md +0 -0
  48. 2025/Unsupervised Meta-Learning via In-Context Learning/images.zip +3 -0
  49. 2025/Unsupervised Meta-Learning via In-Context Learning/layout.json +0 -0
  50. 2025/Unsupervised Model Tree Heritage Recovery/4c7689e4-624c-40ce-8d7a-0e434a5663b9_content_list.json +0 -0
.gitattributes CHANGED
@@ -3216,3 +3216,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
3216
  2025/Unlearning[[:space:]]or[[:space:]]Obfuscating_[[:space:]]Jogging[[:space:]]the[[:space:]]Memory[[:space:]]of[[:space:]]Unlearned[[:space:]]LLMs[[:space:]]via[[:space:]]Benign[[:space:]]Relearning/d2473cd0-0ca2-4652-a5f7-e38817609119_origin.pdf filter=lfs diff=lfs merge=lfs -text
3217
  2025/Unleashing[[:space:]]the[[:space:]]Potential[[:space:]]of[[:space:]]Vision-Language[[:space:]]Pre-Training[[:space:]]for[[:space:]]3D[[:space:]]Zero-Shot[[:space:]]Lesion[[:space:]]Segmentation[[:space:]]via[[:space:]]Mask-Attribute[[:space:]]Alignment/7091d253-fec2-478e-ba2f-a6dc23e7862b_origin.pdf filter=lfs diff=lfs merge=lfs -text
3218
  2025/Unleashing[[:space:]]the[[:space:]]Power[[:space:]]of[[:space:]]Task-Specific[[:space:]]Directions[[:space:]]in[[:space:]]Parameter[[:space:]]Efficient[[:space:]]Fine-tuning/98672c2d-bda0-42ce-9403-aa747fe09cdd_origin.pdf filter=lfs diff=lfs merge=lfs -text
3219
+ 2025/Unlocking[[:space:]]Efficient,[[:space:]]Scalable,[[:space:]]and[[:space:]]Continual[[:space:]]Knowledge[[:space:]]Editing[[:space:]]with[[:space:]]Basis-Level[[:space:]]Representation[[:space:]]Fine-Tuning/a43d6fa2-3ccb-44f7-9c93-8e35d79395ec_origin.pdf filter=lfs diff=lfs merge=lfs -text
3220
+ 2025/Unlocking[[:space:]]Global[[:space:]]Optimality[[:space:]]in[[:space:]]Bilevel[[:space:]]Optimization_[[:space:]]A[[:space:]]Pilot[[:space:]]Study/4be298ba-d505-43be-9230-ca3e8692d35e_origin.pdf filter=lfs diff=lfs merge=lfs -text
3221
+ 2025/Unlocking[[:space:]]Guidance[[:space:]]for[[:space:]]Discrete[[:space:]]State-Space[[:space:]]Diffusion[[:space:]]and[[:space:]]Flow[[:space:]]Models/1210ff35-9465-44cf-b6f5-91533149c936_origin.pdf filter=lfs diff=lfs merge=lfs -text
3222
+ 2025/Unlocking[[:space:]]Point[[:space:]]Processes[[:space:]]through[[:space:]]Point[[:space:]]Set[[:space:]]Diffusion/e3770118-f325-4609-a8ac-be9346630a7e_origin.pdf filter=lfs diff=lfs merge=lfs -text
3223
+ 2025/Unlocking[[:space:]]the[[:space:]]Potential[[:space:]]of[[:space:]]Model[[:space:]]Calibration[[:space:]]in[[:space:]]Federated[[:space:]]Learning/430190dc-b0a8-4e8b-858f-8d860a87a1e9_origin.pdf filter=lfs diff=lfs merge=lfs -text
3224
+ 2025/Unposed[[:space:]]Sparse[[:space:]]Views[[:space:]]Room[[:space:]]Layout[[:space:]]Reconstruction[[:space:]]in[[:space:]]the[[:space:]]Age[[:space:]]of[[:space:]]Pretrain[[:space:]]Model/dc91f6c8-22c0-4dce-99c6-519db6833887_origin.pdf filter=lfs diff=lfs merge=lfs -text
3225
+ 2025/Unsupervised[[:space:]]Disentanglement[[:space:]]of[[:space:]]Content[[:space:]]and[[:space:]]Style[[:space:]]via[[:space:]]Variance-Invariance[[:space:]]Constraints/c2d25ae0-a049-4a50-b7c8-27c324a76f6b_origin.pdf filter=lfs diff=lfs merge=lfs -text
3226
+ 2025/Unsupervised[[:space:]]Meta-Learning[[:space:]]via[[:space:]]In-Context[[:space:]]Learning/8959c2e4-7cb5-4574-88e7-fc386607e7ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
3227
+ 2025/Unsupervised[[:space:]]Model[[:space:]]Tree[[:space:]]Heritage[[:space:]]Recovery/4c7689e4-624c-40ce-8d7a-0e434a5663b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
3228
+ 2025/Unsupervised[[:space:]]Multiple[[:space:]]Kernel[[:space:]]Learning[[:space:]]for[[:space:]]Graphs[[:space:]]via[[:space:]]Ordinality[[:space:]]Preservation/127720d5-f73e-4be0-856a-b8e67a1aeab6_origin.pdf filter=lfs diff=lfs merge=lfs -text
3229
+ 2025/Unsupervised[[:space:]]Zero-Shot[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]via[[:space:]]Dual-Value[[:space:]]Forward-Backward[[:space:]]Representation/191e9af2-b1ff-4a18-96cd-a597fa6dd738_origin.pdf filter=lfs diff=lfs merge=lfs -text
3230
+ 2025/Unveiling[[:space:]]the[[:space:]]Magic[[:space:]]of[[:space:]]Code[[:space:]]Reasoning[[:space:]]through[[:space:]]Hypothesis[[:space:]]Decomposition[[:space:]]and[[:space:]]Amendment/fa59169e-b0dd-441f-815c-233b7f321cca_origin.pdf filter=lfs diff=lfs merge=lfs -text
3231
+ 2025/Unveiling[[:space:]]the[[:space:]]Secret[[:space:]]Recipe_[[:space:]]A[[:space:]]Guide[[:space:]]For[[:space:]]Supervised[[:space:]]Fine-Tuning[[:space:]]Small[[:space:]]LLMs/92d5a7a0-3c3e-41d5-b777-b554395db936_origin.pdf filter=lfs diff=lfs merge=lfs -text
3232
+ 2025/Utilitarian[[:space:]]Algorithm[[:space:]]Configuration[[:space:]]for[[:space:]]Infinite[[:space:]]Parameter[[:space:]]Spaces/0ed3916a-8601-4f6c-a6b1-9f2a9c6acdd1_origin.pdf filter=lfs diff=lfs merge=lfs -text
3233
+ 2025/Utility-Directed[[:space:]]Conformal[[:space:]]Prediction_[[:space:]]A[[:space:]]Decision-Aware[[:space:]]Framework[[:space:]]for[[:space:]]Actionable[[:space:]]Uncertainty[[:space:]]Quantification/bf92b6b8-4e64-4451-9579-c18346f11c79_origin.pdf filter=lfs diff=lfs merge=lfs -text
3234
+ 2025/VAE-Var_[[:space:]]Variational[[:space:]]Autoencoder-Enhanced[[:space:]]Variational[[:space:]]Methods[[:space:]]for[[:space:]]Data[[:space:]]Assimilation[[:space:]]in[[:space:]]Meteorology/bf98a896-3ef9-4496-8c0e-339678371fb8_origin.pdf filter=lfs diff=lfs merge=lfs -text
3235
+ 2025/VCR_[[:space:]]A[[:space:]]Task[[:space:]]for[[:space:]]Pixel-Level[[:space:]]Complex[[:space:]]Reasoning[[:space:]]in[[:space:]]Vision[[:space:]]Language[[:space:]]Models[[:space:]]via[[:space:]]Restoring[[:space:]]Occluded[[:space:]]Text/d6e57779-f5a0-4877-9a48-1458657713cf_origin.pdf filter=lfs diff=lfs merge=lfs -text
3236
+ 2025/VD3D_[[:space:]]Taming[[:space:]]Large[[:space:]]Video[[:space:]]Diffusion[[:space:]]Transformers[[:space:]]for[[:space:]]3D[[:space:]]Camera[[:space:]]Control/ca417fdb-818d-4319-a43c-0487c2df276f_origin.pdf filter=lfs diff=lfs merge=lfs -text
3237
+ 2025/VEDIT_[[:space:]]Latent[[:space:]]Prediction[[:space:]]Architecture[[:space:]]For[[:space:]]Procedural[[:space:]]Video[[:space:]]Representation[[:space:]]Learning/ed060a33-aa91-4eb8-9386-ede3fcb079ee_origin.pdf filter=lfs diff=lfs merge=lfs -text
3238
+ 2025/VICtoR_[[:space:]]Learning[[:space:]]Hierarchical[[:space:]]Vision-Instruction[[:space:]]Correlation[[:space:]]Rewards[[:space:]]for[[:space:]]Long-horizon[[:space:]]Manipulation/52b9ebcd-1750-49c6-98f5-18d3c5dd49cb_origin.pdf filter=lfs diff=lfs merge=lfs -text
3239
+ 2025/VILA-U_[[:space:]]a[[:space:]]Unified[[:space:]]Foundation[[:space:]]Model[[:space:]]Integrating[[:space:]]Visual[[:space:]]Understanding[[:space:]]and[[:space:]]Generation/6567f693-6498-4485-8d9a-d557c97f0d1f_origin.pdf filter=lfs diff=lfs merge=lfs -text
3240
+ 2025/VL-Cache_[[:space:]]Sparsity[[:space:]]and[[:space:]]Modality-Aware[[:space:]]KV[[:space:]]Cache[[:space:]]Compression[[:space:]]for[[:space:]]Vision-Language[[:space:]]Model[[:space:]]Inference[[:space:]]Acceleration/3264e766-637a-4359-b97f-ce939021c873_origin.pdf filter=lfs diff=lfs merge=lfs -text
3241
+ 2025/VL-ICL[[:space:]]Bench_[[:space:]]The[[:space:]]Devil[[:space:]]in[[:space:]]the[[:space:]]Details[[:space:]]of[[:space:]]Multimodal[[:space:]]In-Context[[:space:]]Learning/558e0f5c-9ce3-4ddc-b96d-f65da8d5ed75_origin.pdf filter=lfs diff=lfs merge=lfs -text
3242
+ 2025/VLAS_[[:space:]]Vision-Language-Action[[:space:]]Model[[:space:]]with[[:space:]]Speech[[:space:]]Instructions[[:space:]]for[[:space:]]Customized[[:space:]]Robot[[:space:]]Manipulation/9921bb53-dfc4-4a4c-9e50-fb50110318f5_origin.pdf filter=lfs diff=lfs merge=lfs -text
3243
+ 2025/VLM2Vec_[[:space:]]Training[[:space:]]Vision-Language[[:space:]]Models[[:space:]]for[[:space:]]Massive[[:space:]]Multimodal[[:space:]]Embedding[[:space:]]Tasks/ac2ee39d-8f98-4923-94f8-82fabf2c180c_origin.pdf filter=lfs diff=lfs merge=lfs -text
3244
+ 2025/VOILA_[[:space:]]Evaluation[[:space:]]of[[:space:]]MLLMs[[:space:]]For[[:space:]]Perceptual[[:space:]]Understanding[[:space:]]and[[:space:]]Analogical[[:space:]]Reasoning/7e222b5e-9e77-4c6e-aaef-5878a086774a_origin.pdf filter=lfs diff=lfs merge=lfs -text
3245
+ 2025/VSTAR_[[:space:]]Generative[[:space:]]Temporal[[:space:]]Nursing[[:space:]]for[[:space:]]Longer[[:space:]]Dynamic[[:space:]]Video[[:space:]]Synthesis/c0e6427a-7096-4a74-962f-e4f7ea87878c_origin.pdf filter=lfs diff=lfs merge=lfs -text
3246
+ 2025/VTDexManip_[[:space:]]A[[:space:]]Dataset[[:space:]]and[[:space:]]Benchmark[[:space:]]for[[:space:]]Visual-tactile[[:space:]]Pretraining[[:space:]]and[[:space:]]Dexterous[[:space:]]Manipulation[[:space:]]with[[:space:]]Reinforcement[[:space:]]Learning/c97ce281-a5d2-4ec3-9f84-b89ed9101e54_origin.pdf filter=lfs diff=lfs merge=lfs -text
3247
+ 2025/VVC-Gym_[[:space:]]A[[:space:]]Fixed-Wing[[:space:]]UAV[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]Environment[[:space:]]for[[:space:]]Multi-Goal[[:space:]]Long-Horizon[[:space:]]Problems/8c5f4a3b-ffe6-440f-8d9e-882e1d80425c_origin.pdf filter=lfs diff=lfs merge=lfs -text
3248
+ 2025/Valid[[:space:]]Conformal[[:space:]]Prediction[[:space:]]for[[:space:]]Dynamic[[:space:]]GNNs/877c4e62-6fa5-4bd6-b3ec-dc3d2e392da4_origin.pdf filter=lfs diff=lfs merge=lfs -text
3249
+ 2025/Value-Incentivized[[:space:]]Preference[[:space:]]Optimization_[[:space:]]A[[:space:]]Unified[[:space:]]Approach[[:space:]]to[[:space:]]Online[[:space:]]and[[:space:]]Offline[[:space:]]RLHF/4134e234-2f6a-46fd-a026-b113ecabd6e4_origin.pdf filter=lfs diff=lfs merge=lfs -text
3250
+ 2025/Value-aligned[[:space:]]Behavior[[:space:]]Cloning[[:space:]]for[[:space:]]Offline[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]via[[:space:]]Bi-level[[:space:]]Optimization/64ef6ca8-8aa7-4aff-bb7f-f8a932a43757_origin.pdf filter=lfs diff=lfs merge=lfs -text
3251
+ 2025/Variance-Reducing[[:space:]]Couplings[[:space:]]for[[:space:]]Random[[:space:]]Features/e71b661c-0416-4ac5-ae7b-56a5f014b50e_origin.pdf filter=lfs diff=lfs merge=lfs -text
3252
+ 2025/Variational[[:space:]]Bayesian[[:space:]]Pseudo-Coreset/5895f5e3-5eb3-4f08-8ff9-b03f8e669118_origin.pdf filter=lfs diff=lfs merge=lfs -text
3253
+ 2025/Variational[[:space:]]Best-of-N[[:space:]]Alignment/e47b6cd6-22ce-445a-9c89-7eebdcfe8c78_origin.pdf filter=lfs diff=lfs merge=lfs -text
3254
+ 2025/Variational[[:space:]]Search[[:space:]]Distributions/296ab63f-b21b-47ca-8327-dbd65f5346bf_origin.pdf filter=lfs diff=lfs merge=lfs -text
3255
+ 2025/Varying[[:space:]]Shades[[:space:]]of[[:space:]]Wrong_[[:space:]]Aligning[[:space:]]LLMs[[:space:]]with[[:space:]]Wrong[[:space:]]Answers[[:space:]]Only/50b26986-f0f6-4a92-8844-d13ff910a54c_origin.pdf filter=lfs diff=lfs merge=lfs -text
3256
+ 2025/Vec2Face_[[:space:]]Scaling[[:space:]]Face[[:space:]]Dataset[[:space:]]Generation[[:space:]]with[[:space:]]Loosely[[:space:]]Constrained[[:space:]]Vectors/51443888-d1c6-4e32-b308-b6c6b641154a_origin.pdf filter=lfs diff=lfs merge=lfs -text
3257
+ 2025/Vector-ICL_[[:space:]]In-context[[:space:]]Learning[[:space:]]with[[:space:]]Continuous[[:space:]]Vector[[:space:]]Representations/7e4bbed7-fa29-418e-b5a5-e95dee87288d_origin.pdf filter=lfs diff=lfs merge=lfs -text
3258
+ 2025/Verifying[[:space:]]Properties[[:space:]]of[[:space:]]Binary[[:space:]]Neural[[:space:]]Networks[[:space:]]Using[[:space:]]Sparse[[:space:]]Polynomial[[:space:]]Optimization/067cfe24-4ca7-4fca-a538-c8b4f8f4761a_origin.pdf filter=lfs diff=lfs merge=lfs -text
3259
+ 2025/Vertical[[:space:]]Federated[[:space:]]Learning[[:space:]]with[[:space:]]Missing[[:space:]]Features[[:space:]]During[[:space:]]Training[[:space:]]and[[:space:]]Inference/660bde81-e9e0-41f3-8f31-ad7d9373c2a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
3260
+ 2025/Vevo_[[:space:]]Controllable[[:space:]]Zero-Shot[[:space:]]Voice[[:space:]]Imitation[[:space:]]with[[:space:]]Self-Supervised[[:space:]]Disentanglement/a68c8cf3-11a5-4249-a328-c74aee091ba3_origin.pdf filter=lfs diff=lfs merge=lfs -text
3261
+ 2025/ViBiDSampler_[[:space:]]Enhancing[[:space:]]Video[[:space:]]Interpolation[[:space:]]Using[[:space:]]Bidirectional[[:space:]]Diffusion[[:space:]]Sampler/4a01bd75-aa1e-41e9-92c1-7df7a25a24ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
3262
+ 2025/ViDiT-Q_[[:space:]]Efficient[[:space:]]and[[:space:]]Accurate[[:space:]]Quantization[[:space:]]of[[:space:]]Diffusion[[:space:]]Transformers[[:space:]]for[[:space:]]Image[[:space:]]and[[:space:]]Video[[:space:]]Generation/1c2e24a6-4087-41df-b39a-f64350248dd4_origin.pdf filter=lfs diff=lfs merge=lfs -text
3263
+ 2025/ViSAGe_[[:space:]]Video-to-Spatial[[:space:]]Audio[[:space:]]Generation/dcab9687-88c6-4ce1-9a3e-9b31ecdd9d1a_origin.pdf filter=lfs diff=lfs merge=lfs -text
3264
+ 2025/VibeCheck_[[:space:]]Discover[[:space:]]and[[:space:]]Quantify[[:space:]]Qualitative[[:space:]]Differences[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/d025d64c-c27f-4a44-a51f-d608035756de_origin.pdf filter=lfs diff=lfs merge=lfs -text
3265
+ 2025/Video[[:space:]]Action[[:space:]]Differencing/03f01489-2927-43a2-906e-7611acf88cf8_origin.pdf filter=lfs diff=lfs merge=lfs -text
3266
+ 2025/Video[[:space:]]In-context[[:space:]]Learning_[[:space:]]Autoregressive[[:space:]]Transformers[[:space:]]are[[:space:]]Zero-Shot[[:space:]]Video[[:space:]]Imitators/4ec42cc0-8f3f-4624-917e-335e2d18c6ab_origin.pdf filter=lfs diff=lfs merge=lfs -text
3267
+ 2025/Video-STaR_[[:space:]]Self-Training[[:space:]]Enables[[:space:]]Video[[:space:]]Instruction[[:space:]]Tuning[[:space:]]with[[:space:]]Any[[:space:]]Supervision/65e8a479-6656-46b6-b146-9972f9a0db7b_origin.pdf filter=lfs diff=lfs merge=lfs -text
3268
+ 2025/VideoGrain_[[:space:]]Modulating[[:space:]]Space-Time[[:space:]]Attention[[:space:]]for[[:space:]]Multi-Grained[[:space:]]Video[[:space:]]Editing/2d071d51-6365-44d4-a920-c9ba18a2f583_origin.pdf filter=lfs diff=lfs merge=lfs -text
3269
+ 2025/VideoPhy_[[:space:]]Evaluating[[:space:]]Physical[[:space:]]Commonsense[[:space:]]for[[:space:]]Video[[:space:]]Generation/9dd791ae-d8cf-4e68-8661-cbc482a50c40_origin.pdf filter=lfs diff=lfs merge=lfs -text
3270
+ 2025/VideoShield_[[:space:]]Regulating[[:space:]]Diffusion-based[[:space:]]Video[[:space:]]Generation[[:space:]]Models[[:space:]]via[[:space:]]Watermarking/4352ba72-4d3d-4a3c-85c4-cb89e113c6c0_origin.pdf filter=lfs diff=lfs merge=lfs -text
3271
+ 2025/VideoWebArena_[[:space:]]Evaluating[[:space:]]Long[[:space:]]Context[[:space:]]Multimodal[[:space:]]Agents[[:space:]]with[[:space:]]Video[[:space:]]Understanding[[:space:]]Web[[:space:]]Tasks/9eaca44b-ead4-4ed7-8a6f-c9547413a25a_origin.pdf filter=lfs diff=lfs merge=lfs -text
3272
+ 2025/VisRAG_[[:space:]]Vision-based[[:space:]]Retrieval-augmented[[:space:]]Generation[[:space:]]on[[:space:]]Multi-modality[[:space:]]Documents/f6b04a8c-985d-45f4-bb64-4cab975a63a0_origin.pdf filter=lfs diff=lfs merge=lfs -text
3273
+ 2025/Vision[[:space:]]CNNs[[:space:]]trained[[:space:]]to[[:space:]]estimate[[:space:]]spatial[[:space:]]latents[[:space:]]learned[[:space:]]similar[[:space:]]ventral-stream-aligned[[:space:]]representations/a9fb41d2-8bfd-402d-b073-13fbd12b7784_origin.pdf filter=lfs diff=lfs merge=lfs -text
3274
+ 2025/Vision[[:space:]]and[[:space:]]Language[[:space:]]Synergy[[:space:]]for[[:space:]]Rehearsal[[:space:]]Free[[:space:]]Continual[[:space:]]Learning/bc252637-0ef2-47f4-8dce-83bb91d2979a_origin.pdf filter=lfs diff=lfs merge=lfs -text
3275
+ 2025/Vision-LSTM_[[:space:]]xLSTM[[:space:]]as[[:space:]]Generic[[:space:]]Vision[[:space:]]Backbone/3800f4fa-148c-456f-8c4f-1b293c21e13c_origin.pdf filter=lfs diff=lfs merge=lfs -text
3276
+ 2025/Visual[[:space:]]Agents[[:space:]]as[[:space:]]Fast[[:space:]]and[[:space:]]Slow[[:space:]]Thinkers/f28218d5-a66f-4a9a-be9a-0a0978966ae1_origin.pdf filter=lfs diff=lfs merge=lfs -text
3277
+ 2025/Visual[[:space:]]Description[[:space:]]Grounding[[:space:]]Reduces[[:space:]]Hallucinations[[:space:]]and[[:space:]]Boosts[[:space:]]Reasoning[[:space:]]in[[:space:]]LVLMs/19a9955d-531d-4ab4-abe8-b0df95a7856d_origin.pdf filter=lfs diff=lfs merge=lfs -text
3278
+ 2025/Visual[[:space:]]Haystacks_[[:space:]]A[[:space:]]Vision-Centric[[:space:]]Needle-In-A-Haystack[[:space:]]Benchmark/8a61eff8-d7dc-4b23-a374-1d9a13b33e46_origin.pdf filter=lfs diff=lfs merge=lfs -text
3279
+ 2025/Visual-O1_[[:space:]]Understanding[[:space:]]Ambiguous[[:space:]]Instructions[[:space:]]via[[:space:]]Multi-modal[[:space:]]Multi-turn[[:space:]]Chain-of-thoughts[[:space:]]Reasoning/79c5e575-8476-40fb-a0be-404c2d161fe2_origin.pdf filter=lfs diff=lfs merge=lfs -text
3280
+ 2025/VisualAgentBench_[[:space:]]Towards[[:space:]]Large[[:space:]]Multimodal[[:space:]]Models[[:space:]]as[[:space:]]Visual[[:space:]]Foundation[[:space:]]Agents/676fdb11-f643-48dd-851f-7d6f17f25fbd_origin.pdf filter=lfs diff=lfs merge=lfs -text
3281
+ 2025/Visually[[:space:]]Consistent[[:space:]]Hierarchical[[:space:]]Image[[:space:]]Classification/99e7c309-af80-474a-8c59-e95455257282_origin.pdf filter=lfs diff=lfs merge=lfs -text
3282
+ 2025/Visually[[:space:]]Guided[[:space:]]Decoding_[[:space:]]Gradient-Free[[:space:]]Hard[[:space:]]Prompt[[:space:]]Inversion[[:space:]]with[[:space:]]Language[[:space:]]Models/c14ce079-f199-4061-98e1-b941ffc36cbf_origin.pdf filter=lfs diff=lfs merge=lfs -text
2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/a43d6fa2-3ccb-44f7-9c93-8e35d79395ec_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/a43d6fa2-3ccb-44f7-9c93-8e35d79395ec_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/a43d6fa2-3ccb-44f7-9c93-8e35d79395ec_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ce08942bf4c1eff7c141ef1f97ff69fa8c29f4fabd5e5181e33d54ab05f82384
3
+ size 1129699
2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/full.md ADDED
@@ -0,0 +1,560 @@
1
+ # UNLOCKING EFFICIENT, SCALABLE, AND CONTINUAL KNOWLEDGE EDITING WITH BASIS-LEVEL REPRESENTATION FINE-TUNING
2
+
3
+ Tianci Liu $^{1}$ , Ruirui Li $^{2}$ , Yunzhe Qi $^{3}$ , Hui Liu $^{2}$ , Xianfeng Tang $^{2}$ , Tianqi Zheng $^{2}$ , Qingyu Yin $^{2}$ , Monica Cheng $^{2}$ , Jun Huan $^{4}$ , Haoyu Wang $^{5}$ , Jing Gao $^{1}$
4
+
5
+ $^{1}$ Purdue University $^{2}$ Amazon $^{3}$ UIUC $^{4}$ AWS AI Lab $^{5}$ SUNY Albany
6
+
7
+ 1{liu3351, jinggao}@purdue.edu 2ruirul@amazon.com 5hwang28@albany.edu
8
+
9
+ # ABSTRACT
10
+
11
+ Large language models (LLMs) have achieved remarkable performance on various natural language tasks. However, they are trained on static corpora, and their knowledge can quickly become outdated in a fast-changing world. This motivates the development of knowledge editing methods, which update specific knowledge in LLMs without changing unrelated knowledge. To make selective edits, previous efforts often sought to update a small number of parameters in specific layer(s) of an LLM. Nonetheless, in challenging scenarios they still fall short of making successful edits while simultaneously preserving knowledge irrelevant to the updates, resulting in a notable editing-locality trade-off. In this work, we ask whether these trade-offs are caused by the fact that parameter-based updates have a global effect, i.e., edited parameters affect all inputs indiscriminately. In light of this, we explore the feasibility of representation fine-tuning, which applies a linear update to a few representations in a learned subspace, for knowledge editing. While effective at enhancing an LLM's general abilities, as demonstrated in prior work, we theoretically show that this linear update imposes an inherent tension between editing and locality. Subsequently, BaFT is proposed to break the linearity. BaFT computes a weight for each basis that spans a dimension of the subspace, based on the input representation. This input-dependent weighting mechanism allows BaFT to manage different types of knowledge adaptively, thereby achieving a better editing-locality trade-off. Experiments on three LLMs with five editing benchmarks in diverse scenarios show the superiority of our method.
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ Language models (LMs) parameterized by deep neural networks (Vaswani et al., 2017; Lewis et al., 2019; Radford et al., 2019; Brown et al., 2020) have thrived in producing fluent and meaningful texts on diverse natural language generation and classification tasks (See et al., 2019; Raffel et al., 2020; Ji et al., 2023). These successes underscore the versatility of LMs, establishing them as the foundations for different natural language processing applications (Bommasani et al., 2021; Zhou et al., 2023). Additionally, with model sizes continually increasing, large language models (LLMs) have demonstrated unprecedented abilities to follow natural language instructions (Dong et al., 2022b; Ouyang et al., 2022), empowering zero-shot adaptations to unseen tasks (Kojima et al., 2022), and paving the way towards artificial general intelligence (Bubeck et al., 2023).
16
+
17
+ Despite their remarkable performance, the real-world deployment of LLMs remains largely unresolved. While LLMs can understand a wide range of contexts, they can only provide feedback based on the static knowledge in the data on which they were trained. In a fast-changing world, most knowledge quickly becomes outdated. This can amplify critical issues such as producing factual fallacies (De Cao et al., 2021) or harmful generations (Hartvigsen et al., 2022).
18
+
19
+ As a remedy, knowledge editing, whose goal is to update an LLM with specific new knowledge without hurting irrelevant knowledge, has been proposed (Wang et al., 2023; Zhang et al., 2024b). Early efforts based on full fine-tuning proved ineffective, as they also disrupted irrelevant knowledge (Wang
20
+
21
+ et al., 2023), leading to an editing-locality trade-off. Here, locality refers to the ability to maintain knowledge that is irrelevant to the updates. To achieve good locality, the model update needs to be selective and should rely on a small number of parameters (Wang et al., 2023); thus parameter-efficient fine-tuning (PEFT) methods like AdaLoRA (Zhang et al., 2023) have shown good performance (Wu et al., 2023). On the other hand, Huang et al. (2023); Dong et al. (2022a) restricted updates to specific feed-forward network (FFN) layers that serve as knowledge storage (Dai et al., 2021). Meng et al. (2022a;b) refined the process through a locate-and-edit paradigm, which involves an additional locating stage to identify the layer in which the target knowledge is stored. Nonetheless, these methods still exhibit a certain editing-locality trade-off, regardless of whether locating is performed. We note that these methods are parameter-based and have a global effect, i.e., the edited parameters affect all inputs indiscriminately. This observation calls into question to what extent an edit can truly benefit from the targeted effort to identify "better" parameters that "memorize" certain knowledge (Hase et al., 2024). In other words, it is an open question whether such trade-offs are due to the coarse control of global parameter-based updates.
22
+
23
+ This paper, following Hernandez et al. (2023), which modifies LLM knowledge by updating representations, explores selective representation-based knowledge editing and paves the way for an affirmative answer to the above question. Our work builds upon ReFT (Wu et al., 2024), which fine-tunes a few representations in a low-rank linear subspace and performs on par with PEFT methods such as the LoRA family (Hu et al., 2021; Zhang et al., 2023; Ding et al., 2023) and others (Houlsby et al., 2019; Chen et al., 2024). Unlike parameter-based updates that apply to all inputs, ReFT only alters representations at some locations. Consequently, ReFT can achieve a better editing-locality trade-off than parameter-based updates. Despite this promising result, however, the subspace-level linearity still prevents ReFT from providing precise enough updates for knowledge editing.
24
+
25
+ Specifically, ReFT applies the same linear update in the subspace to all selected representations. While effective at enhancing an LLM's general abilities, such as commonsense reasoning (Wu et al., 2024), this subspace-level control can be too coarse for knowledge editing. As a consequence, when ReFT achieves high editing performance, certain unrelated knowledge may be modified incorrectly, provably jeopardizing its locality. This insight is formalized in Sec 2, where a theoretical analysis of this inherent tension is derived, based on two reasonable assumptions on how representations convey different knowledge. Notably, our analysis reveals an intrinsic limitation of linear representation fine-tuning. It not only holds for knowledge editing, but also applies to other tasks that require selective updates, such as continual learning and machine unlearning, and can be of independent interest to these communities. This theoretical result is one of the main contributions of this paper.
26
+
27
+ In light of this insight, we derive BaFT, a more precise representation fine-tuning method for knowledge editing. Noting that the subspace is spanned by a group of basis vectors, BaFT instead learns a basis-level update. This involves computing a weight for each basis for a given representation, then learning a linear update along this basis. Since each basis spans a rank-1 subspace, BaFT is a generalization of ReFT, in the sense that if all bases use the same constant weight 1, BaFT reduces to ReFT. By using different weight combinations for distinct types of knowledge, BaFT can manage them in a more adaptive way. When auxiliary locality information (e.g., what knowledge should not be updated) is available, BaFT can freely restrict the impact of unimportant bases only, whereas ReFT needs to regulate the whole subspace rigidly. This flexibility makes BaFT highly suitable for knowledge editing and lets it perform on par with the strongest baseline, which relies on external memories to store new knowledge and requires 10-20 times more parameters. In conclusion, BaFT, as a new representation fine-tuning method, reaches a better editing-locality trade-off while maintaining the parameter efficiency of ReFT. This is another main contribution of this work.
28
+
29
+ The rest of the paper is organized as follows. Sec 2 details the proposed BaFT. Extensive experimental results in Sec 3 demonstrate the superiority of our method for knowledge editing at much better parameter efficiency than existing methods. We review related work in Sec 4 and conclude the paper in Sec 5.
30
+
31
+ # 2 PROPOSED METHOD
32
+
33
+ Grounded in a theoretical analysis, we show that the linear nature of the existing representation fine-tuning method induces an inherent limitation on its editing-locality trade-off. We then propose BaFT, a representation fine-tuning method with fine-grained control tailored to knowledge editing.
34
+
35
+ # 2.1 PRELIMINARIES
36
+
37
+ Given input $\pmb{x} = (x_{1},\dots,x_{n})$ , where each $x_{i}\in \mathcal{V}$ is a token from vocabulary $\mathcal{V}$ , a language model (LM) parameterized by $\theta$ assigns probability $p_{\theta}(\pmb {x})$ using the chain rule (Bengio et al., 2000):
38
+
39
+ $$
40
+ p _ {\theta} (\boldsymbol {x}) = \prod_ {i = 1} ^ {n} p _ {\theta} \left(x _ {i} \mid x _ {1}, \dots , x _ {i - 1}\right) \triangleq \prod_ {i = 1} ^ {n} p _ {\theta} \left(x _ {i} \mid \boldsymbol {x} _ {< i}\right),
41
+ $$
42
+
43
+ where $p_{\theta}(x_i \mid x_{<i})$ is the predicted distribution of the next token $x_i$ over $\mathcal{V}$ given the previous tokens $x_{<i}$ . Specifically, for an $L$ -layer LM, let $h_i^{(l)}$ denote the intermediate representation of the $i$ -th token at the $l$ -th layer. The predicted distribution is given by a softmax regression parameterized by $\mathbf{W}$ at layer $L$ :
44
+
45
+ $$
46
+ p_{\theta}\left(x_{i} \mid \boldsymbol{x}_{<i}\right) = \operatorname{softmax}\left(\mathbf{W}\boldsymbol{h}_{i}^{(L)}\right).
47
+ $$
48
+
49
+ To generate a sentence $x$ , the LM repeatedly computes $p_{\theta}(x_i \mid x_{<i})$ and draws $x_i$ from it; then $x_i$ is fed back into the LM as part of the inputs for future steps. The generation process completes if a special token that marks the end of the sentence is returned, or the maximum length is reached.
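As a minimal illustration of this chain-rule factorization, the sketch below scores a sentence by summing next-token log-probabilities. It is only a sketch: it uses the Hugging Face `transformers` API, and `gpt2` is a small stand-in model, not one of the LLMs studied in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in model for illustration only (not used in the paper).
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_logprob(text: str) -> float:
    """log p(x) = sum_i log p(x_i | x_<i), following the factorization above."""
    ids = tok(text, return_tensors="pt").input_ids          # (1, n)
    with torch.no_grad():
        logits = lm(ids).logits                             # (1, n, |V|)
    logp = torch.log_softmax(logits[:, :-1], dim=-1)        # distributions p(x_i | x_<i)
    tgt = ids[:, 1:]                                        # the observed next tokens
    return logp.gather(-1, tgt.unsqueeze(-1)).sum().item()

print(sequence_logprob("The current president of the United States is"))
```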
50
+
51
+ Knowledge Editing aims to incorporate newly provided knowledge into a pre-trained LM while preserving other existing knowledge that should not be modified. Formally, a piece of knowledge can be represented in natural language by a textual pair $(x, y)$ , where $x$ entails some subject and relation, and $y$ refers to the corresponding object. For instance, given $x$ being The current president of United States is, $y$ can be Joe Biden. Knowledge editing seeks to maximize the chance of an LLM responding with $y$ given $x$ , while satisfying the following additional criteria at the same time (Zhang et al., 2024b; Liu et al., 2025a): (1) Generality: there are different ways to express US president, so the edited model should generalize to them. (2) Portability: relevant knowledge such as the first lady of United States should be updated as well. (3) Locality: irrelevant knowledge such as the prime minister of the United Kingdom should not be affected. Notably, modifying only specific internal knowledge in an LM has proved challenging. As revealed in previous works (Zhang et al., 2024b), this process should update only a minimal number of parameters.
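To make these criteria concrete, an edit request can be viewed as a small record like the sketch below, built from the examples in the paragraph above. The field names are ours, chosen for illustration, not taken from any benchmark's actual schema.

```python
# Illustrative edit request with probes for the three criteria
# (hypothetical field names, for illustration only).
edit_request = {
    "prompt": "The current president of United States is",              # x
    "target": "Joe Biden",                                               # y
    "generality_probe": "Who is the president of the USA?",              # paraphrase: should change
    "portability_probe": "Who is the first lady of United States?",      # related fact: should change
    "locality_probe": "The prime minister of United Kingdom is",         # unrelated fact: must not change
}
print(edit_request["prompt"], "->", edit_request["target"])
```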
52
+
53
+ Representation Fine-tuning (ReFT), proposed by Wu et al. (2024), is a recent parameter-efficient fine-tuning (PEFT) method that outperformed other approaches such as LoRA in updating pre-trained LMs on several tasks with far fewer parameters. Building upon the so-called linear representation hypothesis (Park et al., 2023), which presumes that concepts are encoded in linear subspaces of representations, ReFT learns low-rank linear updates on representations. In particular, to update the $d$ -dimensional representation $\pmb{h}_i^{(l)}$ at layer $l$ for the $i$ -th token, ReFT learns
54
+
55
+ $$
56
+ \Phi_ {l} \left(\boldsymbol {h} _ {i} ^ {(l)}; \phi_ {l}\right) = \boldsymbol {h} _ {i} ^ {(l)} + \mathbf {R} _ {l} ^ {\top} \left(\mathbf {A} _ {l} \boldsymbol {h} _ {i} ^ {(l)} + \boldsymbol {b} _ {l} - \mathbf {R} _ {l} \boldsymbol {h} _ {i} ^ {(l)}\right), \tag {1}
57
+ $$
58
+
59
+ where $\phi_{l} = (\mathbf{R}_{l},\mathbf{A}_{l},\boldsymbol{b}_{l})$ are learnable parameters added to layer $l$ . Here $\mathbf{R}_l\in \mathbb{R}^{r\times d}$ is a low-rank matrix (i.e., $r\ll d$ ) with mutually orthogonal rows that specifies the subspace in which the update is made, and $(\mathbf{A}_l,\pmb {b}_l)$ predicts the updated representation in this subspace. Finally, ReFT requires a hyper-parameter $I\subset [n]$ to specify which locations need updates. Put together, ReFT intervenes on layer $l$ 's output by
60
+
61
+ $$
62
+ \boldsymbol{h}_{i}^{(l)} \leftarrow \left(\Phi_{l}\left(\boldsymbol{h}_{i}^{(l)}\right) \ \text{if} \ i \in I \ \text{else} \ \boldsymbol{h}_{i}^{(l)}\right)_{i = 1, \dots, n}.
63
+ $$
64
+
65
+ From now on, we omit indices $i,l$ when discussing how a representation $\pmb{h}$ is intervened for brevity.
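A minimal sketch of the ReFT intervention in Eqn (1); the dimensions and random parameters are for illustration only, and this is not the authors' released implementation.

```python
import torch

d, r = 16, 4                                   # hidden size and subspace rank (illustrative)
Q, _ = torch.linalg.qr(torch.randn(d, r))
R = Q.T                                        # (r, d): mutually orthogonal rows spanning the subspace
A = torch.randn(r, d)                          # learnable projection A_l
b = torch.randn(r)                             # learnable bias b_l

def reft_intervention(h: torch.Tensor) -> torch.Tensor:
    """Phi(h) = h + R^T (A h + b - R h): a linear edit confined to the learned subspace."""
    return h + R.T @ (A @ h + b - R @ h)

h = torch.randn(d)
print(reft_intervention(h).shape)              # torch.Size([16])
```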
66
+
67
+ # 2.2 EDITING KNOWLEDGE BY FINE-TUNING REPRESENTATIONS
68
+
69
+ ReFT has demonstrated impressive performance on tasks such as commonsense reasoning, which largely rely on an LLM's ability to understand and generate text, by updating just a few (i.e., those in $I$ ) representations. However, it is unknown whether this lightweight approach can benefit knowledge editing, which requires modifying selective internal knowledge. Here, we show that the linear nature of ReFT limits its editing and locality performance. Specifically, for all inputs, ReFT applies the same linear update without distinction:
70
+
71
+ $$
72
+ \Phi(\boldsymbol{h}) = \boldsymbol{h} + \mathbf{R}^{\top}(\mathbf{A}\boldsymbol{h} + \boldsymbol{b} - \mathbf{R}\boldsymbol{h}) = \underbrace{\left(\mathbf{I} + \mathbf{R}^{\top}(\mathbf{A} - \mathbf{R})\right)}_{\text{weight}}\boldsymbol{h} + \underbrace{\mathbf{R}^{\top}\boldsymbol{b}}_{\text{bias}}.
73
+ $$
74
+
75
+ This coarse, linear control makes ReFT less suitable for knowledge editing, for two reasons.
76
+
77
+ First, ReFT uses its learned subspace for editing in a predetermined manner, regardless of the varying learning difficulty of different types of knowledge. This can lead to suboptimal performance. As evidence, we fit a rank-12 subspace for ReFT and check how many dimensions (bases) contribute negligible updates, as a measure of dimension redundancy. To this end, we count, for each dimension, whether its update magnitude is $M$ times smaller than that of the maximal dimension. Fig 1 shows these results. We note that the dimension redundancy indeed varies across different types of knowledge.
78
+
79
+ Second, the linearity of ReFT leads to an inherent editing-locality trade-off: it is challenging to maintain good generality and locality at the same time. Formally, given some knowledge involving subject $s$ , relation $r$ , and object $o$ that can be updated by ReFT, we make the following assumptions.
80
+
81
+ ![](images/b9131a64f1f240a12e1a70f0fe6850ea6976af0cf8acfad0bb15e89a7d3036ec.jpg)
82
+ Figure 1: Averaged (w/ max-min range) number of redundant dimensions (which have update $M$ times smaller than maximal values), in a rank-12 ReFT update.
83
+
84
+ Assumption 2.1. Let text $x$ encode $s, r$ . Since the knowledge can be edited by ReFT, text $y$ generated by the LM will convey $o$ if its intermediate representation takes some targeted value $t$ .
85
+
86
+ Assumption 2.2. (Hartvigsen et al., 2024) For any $\pmb{h}$ carrying some knowledge, there exists a positive $\varepsilon(\pmb{h})$ -radius $\ell_2$ ball $B(\pmb{h}, \varepsilon(\pmb{h}))$ around $\pmb{h}$ such that any $\pmb{h}' \in B(\pmb{h}, \varepsilon(\pmb{h}))$ conveys the same knowledge; we refer to $B(\pmb{h}, \varepsilon(\pmb{h}))$ as a stable-ball of $\pmb{h}$ .
87
+
88
+ We provide a few clarifications on the two assumptions. The first assumes that a piece of knowledge can be generated (retrieved) from some associated representation. The second, as in Hartvigsen et al. (2024), assumes that the knowledge is locally stable around its representation, so that a small perturbation will not change the knowledge it carries. Under these two assumptions, the following Thm 2.3 reveals a tension between maintaining good generality and locality simultaneously, with its proof deferred to App B.1.
89
+
90
+ Theorem 2.3. When fine-tuning an LM, ReFT learns to update the old representation $\pmb{h}_0$ to targeted $\pmb{t} = \Phi(\pmb{h}_0)$ . If ReFT maintains good generality such that $\forall \pmb{h} \in B(\pmb{h}_0, \varepsilon(\pmb{h}_0))$ ,
91
+
92
+ $$
93
+ \left\| \Phi (\boldsymbol {h}) - \Phi (\boldsymbol {h} _ {0}) \right\| = \left\| \Phi (\boldsymbol {h}) - \boldsymbol {t} \right\| < \varepsilon (\boldsymbol {t}),
94
+ $$
95
+
96
+ where $\| \cdot \|$ denotes the $\ell_2$ norm. Then, for any irrelevant input $h_{ir}$ with a small stable-ball radius
97
+
98
+ $$
99
+ \varepsilon \left(\boldsymbol {h} _ {i r}\right) < \frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {t}) + \varepsilon \left(\boldsymbol {h} _ {0}\right))}{\varepsilon (\boldsymbol {t}) + 2 \varepsilon \left(\boldsymbol {h} _ {0}\right)} \varepsilon \left(\boldsymbol {h} _ {0}\right),
100
+ $$
101
+
102
+ and that is not too far from $h_0$ , in the sense that
103
+
104
+ $$
105
+ \left\| \boldsymbol {h} _ {i r} - \boldsymbol {h} _ {0} \right\| = \varepsilon \left(\boldsymbol {h} _ {i r}\right) + \varepsilon \left(\boldsymbol {h} _ {0}\right),
106
+ $$
107
+
108
+ ReFT will output $\Phi (\pmb{h}_{ir})\notin B(\pmb{h}_{ir},\varepsilon (\pmb{h}_{ir}))$ and break its locality guarantee.
109
+
110
+ Intuitively, Thm 2.3 formalizes that the ReFT update has to be large enough to make a successful edit and smooth enough to achieve good generality; due to its linearity, it then inevitably hurts the locality of some irrelevant knowledge. This limitation does not depend on the specific $r$ (i.e., subspace rank) being used. In summary, these two limitations make ReFT less suitable for knowledge editing, which motivates BaFT, presented in the next section.
111
+
112
+ # 2.3 BAFT: BASIS-LEVEL REPRESENTATION FINE-TUNING
113
+
114
+ Given the two limitations of linearity, i.e., using the whole linear subspace to update all representations without distinction, and the finding of dimension (basis) redundancy, we propose to take the importance of each dimension into account. Since in ReFT the subspace is parameterized by a set of orthogonal basis vectors, we assign each basis a learnable weight that determines how much it contributes to the current edit. This input-dependent weighting mechanism makes our method apply a non-linear update. We dub our method basis-level representation fine-tuning (BaFT).
115
+
116
+ To be more specific, at a layer where ReFT takes place, we learn an $r$ -dimensional update by
117
+
118
+ $$
119
+ \Phi (\boldsymbol {h}) = \boldsymbol {h} + \sum_ {k = 1} ^ {r} w _ {k} (\boldsymbol {h}) \boldsymbol {r} _ {k} \left(\boldsymbol {a} _ {k} ^ {\top} \boldsymbol {h} + b _ {k} - \boldsymbol {r} _ {k} ^ {\top} \boldsymbol {h}\right), \tag {2}
120
+ $$
121
+
122
+ where $r_1, \ldots, r_r$ are $r$ $d$ -dimensional orthogonal bases, $a_1, \ldots, a_r$ and $b_1, \ldots, b_r$ are $r$ arbitrary vectors and scalars, respectively. Finally, $w_k(h) \in [0,1]$ are $r$ learnable weights. Put together, $w_k(h)(a_k^\top h + b_k - r_k^\top h)$ predicts the magnitude of the update along the direction of basis $r_k$ , and BaFT combines the $r$ updates to form the final intervention. Fig 4 illustrates the overall flow of BaFT.
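A minimal sketch of the basis-level update in Eqn (2). The sigmoid gate used to produce $w_k(\boldsymbol{h}) \in [0,1]$ is our own illustrative choice; the text does not specify the parameterization of the weighting function at this point.

```python
import torch

d, r = 16, 4
Q, _ = torch.linalg.qr(torch.randn(d, r))
R = Q.T                              # (r, d): orthonormal bases r_1, ..., r_r as rows
A = torch.randn(r, d)                # rows a_1, ..., a_r
b = torch.randn(r)                   # scalars b_1, ..., b_r
W_gate = torch.randn(r, d)           # parameters of the weighting function (illustrative assumption)

def baft_intervention(h: torch.Tensor) -> torch.Tensor:
    w = torch.sigmoid(W_gate @ h)    # input-dependent weights w_k(h) in [0, 1]
    delta = A @ h + b - R @ h        # per-basis linear update magnitudes
    return h + R.T @ (w * delta)     # Eqn (3): h + R^T W(h) (A h + b - R h)

h = torch.randn(d)
print(baft_intervention(h).shape)    # torch.Size([16])
# With w fixed to all ones, this reduces to the ReFT update (Lemma 2.4).
```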
123
+
124
+ While it appears distinct, Lem 2.4 shows that BaFT generalizes ReFT. See its proof in App B.3.
125
+
126
+ Lemma 2.4. Let $\mathbf{R} = [r_1; \ldots; r_r] \in \mathbb{R}^{r \times d}$ , $\mathbf{A} = [a_1, \ldots, a_r] \in \mathbb{R}^{r \times d}$ , $\pmb{b}^{\top} = (b_1, \ldots, b_r)$ , and $\mathbf{W}(\pmb{h}) = \text{diag}(w_1(\pmb{h}), \ldots, w_r(\pmb{h}))$ be a diagonal matrix. BaFT in Eqn (2) can be expressed as
127
+
128
+ $$
129
+ \Phi (\boldsymbol {h}) = \boldsymbol {h} + \mathbf {R} ^ {\top} \mathbf {W} (\boldsymbol {h}) (\mathbf {A} \boldsymbol {h} + \boldsymbol {b} - \mathbf {R} \boldsymbol {h}). \tag {3}
130
+ $$
131
+
132
+ When using constant weighting $\mathbf{W}(\pmb {h}) = \mathbf{I}$ , BaFT reduces to ReFT.
133
+
134
+ # 2.4 TRAINING OBJECTIVE OF BAFT
135
+
136
+ We end this section by detailing the training of BaFT. For consistency we use $\phi_l$ to denote the collection of learnable parameters at layer $l$ : $\mathbf{R}$ , $\mathbf{A}$ , $\mathbf{b}$ , and newly introduced parameters in $\mathbf{W}$ . Given a set of pre-specified layers $C_l$ that need interventions, we optimize the collection of all learnable parameters $\phi = \{\phi_l\}_{l \in C_l}$ using the following losses.
137
+
138
+ Teacher-forcing Loss. Following Wu et al. (2024), we train BaFT with a language modeling objective, and minimize the cross-entropy loss with teacher-forcing (Lamb et al., 2016) at output positions
139
+
140
+ $$
141
+ L _ {1} (\phi) \triangleq - \sum_ {i = 1} ^ {m} \log p _ {\theta} \left(y _ {i} \mid x y _ {< i}; \phi\right),
142
+ $$
143
+
144
+ where the intervention is applied to the last $P$ positions in $\pmb{x}$ , together with all entries in $\pmb{y}$ .
145
+
146
+ Incremental Load Balancing Loss. When editing multiple pieces of knowledge, different bases need, on average, balanced weights. Otherwise, using a few fixed bases for all edits is equivalent to using a fixed subspace spanned by these bases, and BaFT reduces to a smaller ReFT. To avoid this reduction, inspired by the sparse mixture of experts (Shazeer et al., 2017; Fedus et al., 2022), we regularize the squared coefficient of variation of $(w_{1}(\pmb{h}),\dots,w_{r}(\pmb{h}))$ . However, as new knowledge may emerge one piece at a time, a direct average over multiple samples is infeasible, and we therefore compute the metric in an incremental way. Namely, when editing the $t$ -th piece of knowledge, we minimize
147
+
148
+ $$
149
+ \mathcal{R}_{\mathrm{bal}}(\phi) \triangleq \sum_{k = 1}^{r} \frac{\left(\bar{w}_{k}(t) - \bar{w}(t)\right)^{2}}{(r - 1)\, \bar{w}(t)}, \quad \text{where} \ \bar{w}(t) = \frac{1}{r} \sum_{k = 1}^{r} \bar{w}_{k}(t),
150
+ $$
151
+
152
+ and $\bar{w}_k(t)$ averages weights $w_k$ over the current and past training samples at selected positions. For incremental optimization, we only minimize $\mathcal{R}_{\mathrm{bal}}(\phi)$ with respect to the current weight on the $t$ -th knowledge, as highlighted by expressing $\bar{w}_k(t)$ as a function of current step $t$ .
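A sketch of one way to maintain the incremental averages $\bar{w}_k(t)$ and compute $\mathcal{R}_{\mathrm{bal}}$; this reflects our reading of the description above rather than the authors' implementation.

```python
import torch

class IncrementalBalance:
    """Running per-basis means w_bar_k(t) and the balance penalty R_bal (illustrative sketch)."""
    def __init__(self, r: int):
        self.r = r
        self.count = 0
        self.past_sum = torch.zeros(r)                   # detached sum of past weights

    def penalty(self, w_t: torch.Tensor) -> torch.Tensor:
        # w_t: (r,) weights of the current (t-th) edit; only this term carries gradients.
        self.count += 1
        w_bar_k = (self.past_sum + w_t) / self.count     # incremental mean w_bar_k(t)
        w_bar = w_bar_k.mean()                           # w_bar(t)
        self.past_sum += w_t.detach()                    # past samples stay fixed
        return ((w_bar_k - w_bar) ** 2).sum() / ((self.r - 1) * w_bar)

balancer = IncrementalBalance(r=4)
print(balancer.penalty(torch.rand(4, requires_grad=True)))
```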
153
+
154
+ Locality Regularization. In some scenarios, it is feasible to obtain examples of irrelevant knowledge during training (Wang et al., 2024d; Yu et al., 2024). Such information can benefit the training of BaFT as well. Following Wang et al. (2024d), we incorporate a margin loss as a regularizer. Let $h$ and $h_{\mathrm{ir}}$ denote the representations of edited and irrelevant knowledge, respectively; we minimize
155
+
156
+ $$
157
+ \mathcal{R}_{\mathrm{loc}}(\phi) = \underbrace{\max\left(0, \mathbf{W}(h_{\mathrm{ir}}) - \alpha\right)}_{\text{irr. weight } w(h_{\mathrm{ir}}) \leq \alpha} + \underbrace{\max\left(0, \beta - \mathbf{W}(h)\right)}_{\text{edit. weight } w(h) \geq \beta} + \underbrace{\max\left(0, \gamma - \left(\mathbf{W}(h)_{\max} - \mathbf{W}(h_{\mathrm{ir}})_{\max}\right)\right)}_{\text{edit weight } \geq \text{ loc. weight}}.
158
+ $$
159
+
160
+ Colloquially, $\mathcal{R}_{\mathrm{loc}}(\phi)$ encourages the weights for irrelevant knowledge to be no larger than $\alpha$ , the weights for edited knowledge to be no smaller than $\beta$ , and, at the same time, the largest weights from the two groups to be separated by a margin of at least $\gamma$ .
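A sketch of the three margin terms, reducing each basis-weight vector with a max as suggested by the last term; the reduction used for the first two terms and the hyper-parameter values are our own assumptions.

```python
import torch

def locality_penalty(w_edit: torch.Tensor, w_ir: torch.Tensor,
                     alpha: float = 0.1, beta: float = 0.9, gamma: float = 0.5) -> torch.Tensor:
    # w_edit, w_ir: (r,) basis weights for edited and irrelevant representations.
    keep_irrelevant_low = torch.clamp(w_ir.max() - alpha, min=0.0)            # irr. weight <= alpha
    push_edit_high = torch.clamp(beta - w_edit.max(), min=0.0)                # edit weight >= beta
    keep_margin = torch.clamp(gamma - (w_edit.max() - w_ir.max()), min=0.0)   # gap of at least gamma
    return keep_irrelevant_low + push_edit_high + keep_margin

print(locality_penalty(torch.rand(4), 0.2 * torch.rand(4)))
```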
161
+
162
+ In execution, we rescale the three terms to the same magnitude and solve the following objective
163
+
164
+ $$
165
+ \min_{\phi} L(\phi) \triangleq \min_{\phi}\ L_{1}(\phi) + \mathcal{R}_{\mathrm{bal}}(\phi) + \mathcal{R}_{\mathrm{loc}}(\phi). \tag{4}
166
+ $$
167
+
168
+ ReFT, as a special case of BaFT, only minimizes $L_{1}(\phi)$ .
169
+
170
+ # 3 EXPERIMENT
171
+
172
+ We test the proposed BaFT for knowledge editing on three 7B-level autoregressive language models (LMs) over five public benchmarks. Ablation studies are also conducted. Experimental results show that BaFT achieves excellent performance at much better parameter efficiency.
173
+
174
+ # 3.1 EXPERIMENT SETUP
175
+
176
+ Base Models. We conduct experiments on three representative LLMs from different model families. LLaMA 2-7b (and LLaMA 2-7b-Chat) (Touvron et al., 2023) have been widely studied in the literature (Zhang et al., 2024b; Wang et al., 2024d) and we follow this convention. Trending LLaMA 3-8b-Instruct (Dubey et al., 2024) and Gemma 1.1-7b-Instruct (Team et al., 2024) are also studied. From now on, we refer to the three LLMs as LLaMA 2(-chat), LLaMA 3, and Gemma for brevity.
177
+
178
+ Tasks. Following previous works (Wang et al., 2023; Zhang et al., 2024b), we edit different kinds of knowledge: WikiData<sup>recent</sup>, WikiData<sup>counterfact</sup> (Cohen et al., 2024), WikiBio (Hartvigsen et al., 2024), ConvSent (Mitchell et al., 2022), and ZsRE (Yao et al., 2023). Due to the page limit, we refer readers to Zhang et al. (2024b) for more benchmark details. When editing an LLM, three scenarios are considered. Single Editing updates one piece of knowledge at a time. Continual Editing and Batched Editing, on the other hand, update multiple pieces of knowledge in a sequential or batched way. The latter two are more challenging due to potential forgetting and knowledge-conflict problems, as observed in the literature (Hartvigsen et al., 2024; Wang et al., 2024d).
179
+
180
+ Baselines. We follow Zhang et al. (2024b); Wang et al. (2024e) and choose AdaLoRA (Zhang et al., 2023), ROME and FT-L (Meng et al., 2022a), and MEMIT (Meng et al., 2022b) as baselines. In continual editing scenarios, we further include the representative memory-based methods GRACE (Hartvigsen et al., 2024), MELO (Yu et al., 2024), and WISE (Wang et al., 2024d). Like ours, none of these baselines requires large-scale, hard-to-access training data or training additional models: AdaLoRA learns a low-rank update of model parameters on the new knowledge while keeping less important parameters unchanged, thereby achieving highly efficient and precise PEFT. ROME applies a causal-tracing analysis to identify the layer in which the knowledge is stored and then solves an analytic rank-one update. FT-L, on the other hand, directly fine-tunes the layer identified by ROME with an additional KL divergence loss. MEMIT extends ROME to the batched editing setting by identifying a series of layers to edit and finding the updates as least-squares solutions. GRACE, MELO, and WISE are specialized for continual editing. They leverage side parameters to store new knowledge and learn a gating mechanism to determine whether pre-trained or new knowledge should be used during inference. Finally, we include ReFT as a baseline that uses a subspace of the same rank as BaFT.
181
+
182
+ Evaluation Criteria. We evaluate the performance from multiple aspects (Zhang et al., 2024b; Wang et al., 2024d). Given an edited model, reliability (Rel.) evaluates whether it successfully learns the new knowledge; generality (Gen.) measures to what extent it can generalize to rephrased knowledge inquiries; locality (Loc.) quantifies how much the model can retain its original output on irrelevant knowledge inquiries; portability (Por.) checks if the model is able to transfer new knowledge to related content. We report the average of different metrics<sup>1</sup> for more complete comparisons.
183
+
184
+ Implementation Details. Our experiments are conducted with EasyEdit (Wang et al., 2024e). More implementation details and hyper-parameters can be found in App C.
185
+
186
+ # 3.2 SINGLE EDITING PERFORMANCE
187
+
188
+ We evaluate the effectiveness of the proposed BaFT for conducting Single Editing on WikiData<sup>recent</sup>, WikiData<sup>counterfact</sup>, WikiBio, and ConvSent (only supports LLaMA family). The four benchmarks do not contain irrelevant data. Consequently, BaFT training does not involve the locality regularization.
189
+
190
+ Single Editing results are reported in Tab 1. The proposed BaFT performs highly competitively in all cases. BaFT and ReFT use a subspace of the same rank to edit representations, so an ideal BaFT should achieve reliability comparable to ReFT that can edit representations freely. Indeed, BaFT maintains a better editing-locality trade-off: it consistently achieves better locality and portability
191
+
192
+ than ReFT with no degradation in reliability. In comparison, other baselines suffer from a notable editing-locality trade-off, i.e., they achieve high reliability at the price of low locality. These methods also exhibit significant performance gaps when editing different LLMs. These results establish BaFT as a promising new editing solution.
193
+
194
+ Table 1: Single Editing performance on four benchmark datasets. Results marked with “$\diamond$” are taken from Zhang et al. (2024b). Unsupported experiments are marked with “x”. Best Avg. results are in bold and second best are underlined.
195
+
196
+ <table><tr><td></td><td colspan="4">Wiki\(_{\text{recent}}\)</td><td colspan="4">Wiki\(_{\text{counterfact}}\)</td><td colspan="3">WikiBio</td><td>ConvSent</td></tr><tr><td></td><td colspan="12">LLaMA 2-7b-chat</td></tr><tr><td></td><td>Rel.</td><td>Por.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Por.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td></tr><tr><td>AdaLoRA\(\diamond\)</td><td>1.00</td><td>0.65</td><td>0.56</td><td>0.74</td><td>1.00</td><td>0.70</td><td>0.70</td><td>0.80</td><td>1.00</td><td>0.81</td><td>0.91</td><td>0.45</td></tr><tr><td>FT-L\(\diamond\)</td><td>0.56</td><td>0.41</td><td>0.44</td><td>0.47</td><td>0.45</td><td>0.34</td><td>0.50</td><td>0.51</td><td>0.66</td><td>0.80</td><td>0.73</td><td>0.50</td></tr><tr><td>ROME\(\diamond\)</td><td>0.97</td><td>0.55</td><td>0.55</td><td>0.69</td><td>0.99</td><td>0.56</td><td>0.52</td><td>0.69</td><td>0.96</td><td>0.63</td><td>0.80</td><td>0.46</td></tr><tr><td>MEMIT\(\diamond\)</td><td>0.97</td><td>0.56</td><td>0.52</td><td>0.68</td><td>0.98</td><td>0.59</td><td>0.47</td><td>0.68</td><td>0.94</td><td>0.62</td><td>0.78</td><td>0.45</td></tr><tr><td>ReFT</td><td>1.00</td><td>0.60</td><td>0.71</td><td>0.77</td><td>1.00</td><td>0.72</td><td>0.78</td><td>0.83</td><td>1.00</td><td>0.91</td><td>0.96</td><td>1.00</td></tr><tr><td>BaFT (Ours)</td><td>1.00</td><td>0.61</td><td>0.73</td><td>0.78</td><td>1.00</td><td>0.72</td><td>0.81</td><td>0.84</td><td>1.00</td><td>0.94</td><td>0.97</td><td>1.00</td></tr><tr><td></td><td colspan="12">LLaMA 3-8b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Por.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Por.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.61</td><td>0.45</td><td>0.69</td><td>1.00</td><td>0.74</td><td>0.51</td><td>0.75</td><td>1.00</td><td>0.79</td><td>0.90</td><td>1.00</td></tr><tr><td>FT-L</td><td>0.47</td><td>0.27</td><td>0.22</td><td>0.32</td><td>0.43</td><td>0.32</td><td>0.22</td><td>0.32</td><td>0.56</td><td>0.71</td><td>0.64</td><td>0.52</td></tr><tr><td>ROME</td><td>0.99</td><td>0.58</td><td>0.49</td><td>0.69</td><td>0.99</td><td>0.58</td><td>0.41</td><td>0.66</td><td>0.92</td><td>0.68</td><td>0.80</td><td>0.98</td></tr><tr><td>MEMIT</td><td>0.99</td><td>0.54</td><td>0.48</td><td>0.67</td><td>0.99</td><td>0.58</td><td>0.43</td><td>0.67</td><td>0.96</td><td>0.71</td><td>0.84</td><td>0.32</td></tr><tr><td>ReFT</td><td>1.00</td><td>0.62</td><td>0.62</td><td>0.75</td><td>1.00</td><td>0.72</td><td>0.74</td><td>0.82</td><td>1.00</td><td>0.87</td><td>0.94</td><td>0.98</td></tr><tr><td>BaFT (Ours)</td><td>1.00</td><td>0.62</td><td>0.64</td><td>0.75</td><td>1.00</td><td>0.72</td><td>0.75</td><td>0.82</td><td>1.00</td><td>0.91</td><td>0.96</td><td>0.96</td></tr><tr><td></td><td colspan="12">Gemma 
1.1-7b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Por.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Por.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.58</td><td>0.28</td><td>0.62</td><td>1.00</td><td>0.70</td><td>0.35</td><td>0.68</td><td>1.00</td><td>0.70</td><td>0.85</td><td>x</td></tr><tr><td>FT-L</td><td>0.35</td><td>0.20</td><td>0.03</td><td>0.26</td><td>0.20</td><td>0.18</td><td>0.01</td><td>0.13</td><td>0.24</td><td>0.14</td><td>0.19</td><td>x</td></tr><tr><td>ROME</td><td>0.79</td><td>0.38</td><td>0.27</td><td>0.48</td><td>0.82</td><td>0.47</td><td>0.27</td><td>0.52</td><td>0.47</td><td>0.31</td><td>0.39</td><td>x</td></tr><tr><td>ReFT</td><td>1.00</td><td>0.54</td><td>0.55</td><td>0.70</td><td>1.00</td><td>0.63</td><td>0.72</td><td>0.78</td><td>1.00</td><td>0.82</td><td>0.91</td><td>x</td></tr><tr><td>BaFT (Ours)</td><td>1.00</td><td>0.54</td><td>0.58</td><td>0.71</td><td>1.00</td><td>0.62</td><td>0.77</td><td>0.80</td><td>1.00</td><td>0.85</td><td>0.93</td><td>x</td></tr></table>
197
+
198
+ # 3.3 CONTINUAL AND BATCHED EDITING PERFORMANCE
199
+
200
+ Next, we study the two challenging scenarios where massive numbers of edits are conducted in a sequential (continual) or batched way. We follow Wang et al. (2024d) and experiment with LLaMA 2 (non-chat version), LLaMA 3, and Gemma on ZsRE. We note that the state-of-the-art continual editing method WISE has a substantially larger parameter count and is much more computationally expensive. For a fair comparison, we include $\mathrm{WISE}_{\mathrm{light}}$ , a lightweight version of WISE that contains 1/8 of the learnable parameters of the original WISE to make its training affordable. We want to highlight that $\mathrm{WISE}_{\mathrm{light}}$ does not change the editing mechanism², and still contains more learnable parameters than BaFT and ReFT (10 and 20 times more, respectively). Learnable parameters used by different methods, along with their time consumption, are reported in Tab 3.
201
+
202
+ Continual Editing Performance. Tab 2 presents the main results of continually editing 1000 pieces of ZsRE knowledge. BaFT again achieves remarkable editing performance while maintaining excellent locality on LLMs from different families, ranking among the top two in nearly all scenarios. In comparison, the standard methods AdaLoRA, FT-L, ROME, and MEMIT encounter considerable performance gaps across different LLMs. Meanwhile, they fall short in editing multiple pieces of knowledge that emerge sequentially. WISE performs slightly better, but its parameter efficiency is much lower, as we will show shortly. GRACE is designed for continual editing but still fails when editing Gemma. These methods might benefit from more extensive hyper-parameter tuning for each
203
+
204
+ ![](images/3c26b1bb05c3692a5b6068733fc6e0977fad6701c5766228c167806a8c533e3b.jpg)
205
+ Figure 2: Basis weights for edited and irrelevant knowledge (averaged over different positions).
206
+
207
+ LLM. Nonetheless, their prolonged running time makes this process expensive, if not unaffordable.
208
+
209
+ When comparing BaFT and ReFT with each other, we note that, as in Single Editing, BaFT matches, if not surpasses, the editing ability of ReFT. In addition, when the editing number $T$ increases, BaFT shows excellent robustness against forgetting, as indicated by its ability to preserve high locality in all scenarios. We further visualize the basis weights in Fig 2, where a one-layer BaFT is used to edit LLaMA 2 on 100 pieces of ZsRE knowledge with $T = 10$ (the achieved reliability, generality, and locality are 0.75, 0.71, and 0.98, respectively). Rel., Gen., and Loc. refer to new, rephrased, and unrelated knowledge, respectively. We note that BaFT evenly distributes the editing over all bases, and unrelated knowledge receives significantly lower weights. These results confirm that BaFT leverages the fine-grained basis-level control as designed in Sec 2, thereby excelling at Continual Editing.
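+
+ For readers who want to reproduce a Fig 2-style analysis, the sketch below shows one way to average the per-basis gate weights over prompts and intervened positions. It is only an illustration: the `gate` module, tensor shapes, and names are hypothetical placeholders, and App C describes the actual logistic-regression gates.
+
+ ```python
+ import torch
+
+ # One logistic regression per basis: w_k(h) = sigmoid(u_k^T h + c_k).
+ # Hidden size 4096 and rank 12 are illustrative values (cf. Tab 5).
+ gate = torch.nn.Linear(4096, 12)
+
+ def mean_basis_weights(hidden_states: torch.Tensor) -> torch.Tensor:
+     # hidden_states: (num_prompts, num_intervened_positions, hidden_dim)
+     w = torch.sigmoid(gate(hidden_states))   # (prompts, positions, rank)
+     return w.mean(dim=(0, 1))                # average weight per basis
+
+ h_edited = torch.randn(100, 4, 4096)      # stand-in for edited-prompt hidden states
+ h_unrelated = torch.randn(100, 4, 4096)   # stand-in for unrelated-prompt hidden states
+ print(mean_basis_weights(h_edited), mean_basis_weights(h_unrelated))
+ ```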
210
+
211
+ Table 2: Continual Editing performance on ZsRE dataset, evaluated after conducting $T$ times of editing sequentially. Results marked with "♥" are taken from Wang et al. (2024d). Best Avg. results are in bold and second best are underlined.
212
+
213
+ <table><tr><td></td><td colspan="4">T=1</td><td colspan="4">T=10</td><td colspan="4">T=100</td><td colspan="4">T=1000</td></tr><tr><td></td><td colspan="16">LLaMA 2-7b</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.90</td><td>0.92</td><td>0.94</td><td>0.39</td><td>0.38</td><td>0.50</td><td>0.42</td><td>0.06</td><td>0.06</td><td>0.06</td><td>0.06</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>FT-L</td><td>0.57</td><td>0.53</td><td>0.96</td><td>0.69</td><td>0.35</td><td>0.31</td><td>0.12</td><td>0.26</td><td>0.29</td><td>0.26</td><td>0.09</td><td>0.21</td><td>0.24</td><td>0.20</td><td>0.25</td><td>0.23</td></tr><tr><td>ROME</td><td>0.96</td><td>0.91</td><td>0.98</td><td>0.95</td><td>0.80</td><td>0.76</td><td>0.77</td><td>0.78</td><td>0.18</td><td>0.18</td><td>0.07</td><td>0.12</td><td>0.00</td><td>0.01</td><td>0.00</td><td>0.00</td></tr><tr><td>MEMIT</td><td>0.95</td><td>0.90</td><td>0.99</td><td>0.95</td><td>0.77</td><td>0.74</td><td>0.90</td><td>0.80</td><td>0.25</td><td>0.24</td><td>0.19</td><td>0.02</td><td>0.04</td><td>0.04</td><td>0.02</td><td>0.03</td></tr><tr><td>MELO</td><td>1.00</td><td>0.40</td><td>0.99</td><td>0.80</td><td>0.95</td><td>0.40</td><td>0.99</td><td>0.78</td><td>0.61</td><td>0.40</td><td>0.99</td><td>0.67</td><td>0.40</td><td>0.40</td><td>0.99</td><td>0.60</td></tr><tr><td>GRACE</td><td>0.98</td><td>0.08</td><td>1.00</td><td>0.69</td><td>0.96</td><td>0.00</td><td>1.00</td><td>0.65</td><td>0.96</td><td>0.00</td><td>1.00</td><td>0.65</td><td>0.97</td><td>0.08</td><td>1.00</td><td>0.68</td></tr><tr><td>WISEfull</td><td>0.98</td><td>0.92</td><td>1.00</td><td>0.97</td><td>0.94</td><td>0.88</td><td>1.00</td><td>0.94</td><td>0.90</td><td>0.81</td><td>1.00</td><td>0.90</td><td>0.77</td><td>0.72</td><td>1.00</td><td>0.83</td></tr><tr><td>WISElight</td><td>0.95</td><td>0.83</td><td>1.00</td><td>0.93</td><td>0.93</td><td>0.74</td><td>1.00</td><td>0.89</td><td>0.83</td><td>0.73</td><td>0.99</td><td>0.85</td><td>0.49</td><td>0.47</td><td>1.00</td><td>0.65</td></tr><tr><td>ReFT</td><td>1.00</td><td>0.95</td><td>0.94</td><td>0.96</td><td>0.90</td><td>0.85</td><td>0.88</td><td>0.87</td><td>0.78</td><td>0.74</td><td>0.83</td><td>0.78</td><td>0.58</td><td>0.56</td><td>0.73</td><td>0.62</td></tr><tr><td>BaFT (Ours)</td><td>1.00</td><td>0.94</td><td>0.97</td><td>0.97</td><td>0.89</td><td>0.84</td><td>0.97</td><td>0.90</td><td>0.75</td><td>0.70</td><td>0.98</td><td>0.81</td><td>0.63</td><td>0.60</td><td>0.98</td><td>0.74</td></tr><tr><td></td><td colspan="16">LLaMA 
3-8b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.99</td><td>0.85</td><td>0.95</td><td>0.27</td><td>0.26</td><td>0.26</td><td>0.26</td><td>0.03</td><td>0.03</td><td>0.01</td><td>0.02</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>FT-L</td><td>0.51</td><td>0.52</td><td>0.68</td><td>0.57</td><td>0.25</td><td>0.20</td><td>0.03</td><td>0.16</td><td>0.19</td><td>0.16</td><td>0.02</td><td>0.12</td><td>0.16</td><td>0.14</td><td>0.01</td><td>0.10</td></tr><tr><td>ROME</td><td>0.99</td><td>0.96</td><td>0.96</td><td>0.97</td><td>0.62</td><td>0.63</td><td>0.42</td><td>0.56</td><td>0.07</td><td>0.07</td><td>0.01</td><td>0.05</td><td>0.03</td><td>0.03</td><td>0.00</td><td>0.02</td></tr><tr><td>MEMIT</td><td>0.99</td><td>0.96</td><td>0.98</td><td>0.98</td><td>0.68</td><td>0.66</td><td>0.71</td><td>0.68</td><td>0.03</td><td>0.03</td><td>0.02</td><td>0.03</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>MELO</td><td>1.00</td><td>0.29</td><td>1.00</td><td>0.76</td><td>0.97</td><td>0.30</td><td>1.00</td><td>0.76</td><td>0.55</td><td>0.31</td><td>0.99</td><td>0.62</td><td>0.31</td><td>0.30</td><td>0.99</td><td>0.53</td></tr><tr><td>GRACE</td><td>0.33</td><td>0.00</td><td>0.54</td><td>0.29</td><td>0.33</td><td>0.02</td><td>0.56</td><td>0.30</td><td>0.33</td><td>0.02</td><td>0.57</td><td>0.31</td><td>0.33</td><td>0.02</td><td>0.55</td><td>0.30</td></tr><tr><td>WISElight</td><td>0.95</td><td>0.91</td><td>0.99</td><td>0.95</td><td>0.82</td><td>0.76</td><td>1.00</td><td>0.86</td><td>0.63</td><td>0.57</td><td>1.00</td><td>0.73</td><td>0.39</td><td>0.37</td><td>1.00</td><td>0.59</td></tr><tr><td>ReFT</td><td>1.00</td><td>0.97</td><td>0.93</td><td>0.97</td><td>0.90</td><td>0.84</td><td>0.87</td><td>0.87</td><td>0.68</td><td>0.61</td><td>0.74</td><td>0.68</td><td>0.48</td><td>0.45</td><td>0.64</td><td>0.52</td></tr><tr><td>BaFT (Ours)</td><td>1.00</td><td>0.95</td><td>0.96</td><td>0.97</td><td>0.89</td><td>0.82</td><td>0.95</td><td>0.89</td><td>0.70</td><td>0.64</td><td>0.93</td><td>0.76</td><td>0.50</td><td>0.49</td><td>0.93</td><td>0.64</td></tr><tr><td></td><td colspan="16">Gemma 
1.1-7b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.97</td><td>0.67</td><td>0.88</td><td>0.19</td><td>0.20</td><td>0.18</td><td>0.19</td><td>0.03</td><td>0.03</td><td>0.01</td><td>0.02</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>FT-L</td><td>0.28</td><td>0.33</td><td>0.09</td><td>0.23</td><td>0.14</td><td>0.06</td><td>0.00</td><td>0.07</td><td>0.07</td><td>0.04</td><td>0.00</td><td>0.04</td><td>0.05</td><td>0.04</td><td>0.00</td><td>0.03</td></tr><tr><td>ROME</td><td>0.75</td><td>0.71</td><td>0.88</td><td>0.78</td><td>0.18</td><td>0.18</td><td>0.05</td><td>0.14</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>MELO</td><td>1.00</td><td>0.20</td><td>1.00</td><td>0.73</td><td>0.96</td><td>0.23</td><td>1.00</td><td>0.73</td><td>0.52</td><td>0.26</td><td>0.95</td><td>0.58</td><td>0.26</td><td>0.25</td><td>0.95</td><td>0.49</td></tr><tr><td>GRACE</td><td>0.39</td><td>0.00</td><td>1.00</td><td>0.46</td><td>0.39</td><td>0.01</td><td>1.00</td><td>0.47</td><td>0.39</td><td>0.01</td><td>1.00</td><td>0.47</td><td>0.39</td><td>0.01</td><td>1.00</td><td>0.47</td></tr><tr><td>WISElight</td><td>0.99</td><td>0.96</td><td>1.00</td><td>0.98</td><td>0.90</td><td>0.84</td><td>0.99</td><td>0.91</td><td>0.79</td><td>0.71</td><td>0.95</td><td>0.82</td><td>0.48</td><td>0.42</td><td>0.98</td><td>0.63</td></tr><tr><td>ReFT</td><td>1.00</td><td>0.86</td><td>0.91</td><td>0.92</td><td>0.92</td><td>0.81</td><td>0.81</td><td>0.85</td><td>0.66</td><td>0.58</td><td>0.69</td><td>0.64</td><td>0.50</td><td>0.46</td><td>0.65</td><td>0.54</td></tr><tr><td>BaFT (Ours)</td><td>1.00</td><td>0.84</td><td>0.94</td><td>0.93</td><td>0.92</td><td>0.80</td><td>0.92</td><td>0.88</td><td>0.70</td><td>0.62</td><td>0.92</td><td>0.75</td><td>0.48</td><td>0.45</td><td>0.92</td><td>0.62</td></tr></table>
214
+
215
+ Table 3: Parameter size and editing time with an NVIDIA V100 32-GB GPU (averaged over 100 samples). ROME, MEMIT, and GRACE do not contain pre-specified learnable parameters.
216
+
217
+ <table><tr><td rowspan="2"></td><td colspan="2">LLaMA 2-7b(-chat)</td><td colspan="2">LLaMA 3-8b-Instruct</td><td colspan="2">Gemma 1.1-7b-Instruct</td></tr><tr><td># Params.</td><td>Time (sec./edit)</td><td># Params.</td><td>Time (sec./edit)</td><td># Params.</td><td>Time (sec./edit)</td></tr><tr><td>AdaLoRA</td><td>6,292,224</td><td>26.24</td><td>5,112,576</td><td>28.71</td><td>4,817,568</td><td>44.24</td></tr><tr><td>FT-L</td><td>45,088,768</td><td>9.73</td><td>58,720,256</td><td>10.84</td><td>75,497,472</td><td>11.95</td></tr><tr><td>ROME</td><td>/</td><td>27.27</td><td>/</td><td>25.01</td><td>/</td><td>52.07</td></tr><tr><td>MEMIT</td><td>/</td><td>20.01</td><td>/</td><td>25.35</td><td>/</td><td>/</td></tr><tr><td>GRACE</td><td>/</td><td>34.38</td><td>/</td><td>87.08</td><td>/</td><td>43.45</td></tr><tr><td>WISElight</td><td>5,636,096</td><td>58.00</td><td>7,340,032</td><td>65.77</td><td>9,437,184</td><td>20.20</td></tr><tr><td>ReFT</td><td>393,264</td><td>10.99</td><td>393,264</td><td>9.33</td><td>294,960</td><td>7.79</td></tr><tr><td>BaFT (Ours)</td><td>606,256</td><td>13.46</td><td>606,256</td><td>12.69</td><td>454,704</td><td>10.13</td></tr></table>
218
+
219
+ Batched Editing Performance. We further compare BaFT and ReFT against baselines that admit batched data for editing, namely AdaLoRA, FT-L, and MEMIT. LMs were edited on the ZsRE dataset, with batch sizes of 10 and 50.
220
+
221
+ We visualize the average of reliability, generality, and locality in Fig 3, and defer the complete results to App D. The proposed BaFT again achieved a good balance between edit success and locality, outperforming all baselines in 5 out of 6 scenarios. Surprisingly, when $T = 10$ , AdaLoRA
222
+
223
+ and MEMIT benefited from editing multiple samples in a batch rather than one by one. We conjecture that learning multiple pieces of knowledge in a batch helps mitigate their overfitting to any single piece of knowledge, thereby weakening the forgetting problem to some extent. This finding suggests that caching more knowledge and editing it in batches can be beneficial in some cases.
224
+
225
+ Parameter Efficiency. Continual and Batched Editing involve learning more knowledge than Single Editing. As a result, achieving good editing performance while maintaining high parameter efficiency is non-trivial, as using fewer parameters increases the workload of each parameter to learn more knowledge. We note that while $\mathrm{WISE}_{\mathrm{light}}$ achieved performance comparable to BaFT, its parameter efficiency was much lower: on LLaMA 2-7b, the edit success dropped from 0.77 (full WISE) to 0.49 when editing 1000 pieces of knowledge, around $22\%$ lower than BaFT (0.63 in Tab 2), which uses roughly 10 times fewer parameters as per Tab 3. Similar trends can be found when comparing with AdaLoRA in Batched Editing scenarios. In conclusion, BaFT achieves much better parameter efficiency than existing methods.
226
+
227
+ ![](images/95e2c2cb27d646991e36d37c838333066bbb8ae60bdbb77e45db7d0d0d2a1040.jpg)
228
+ (a) LLaMA 2-7b
229
+
230
+ ![](images/e938961bba67db74aafc1b48f9a0e4716619a9972192ed3db89fe2ea4f17af90.jpg)
231
+ (b) LLaMA 3-8b-Instruct
232
+
233
+ ![](images/655c28d4fb5a0ff04a2231d6c7ed7922192b1b80ef3a1f8d08ffc332831c23ad.jpg)
234
+ (c) Gemma 1.1-7b-Instruct
235
+
236
+ ![](images/7d9e8972a3be721c8bf3886b127751870c0dba9ec5ac093c3dbdfce1285ac07a.jpg)
237
+ (d) LLaMA 2-7b
238
+
239
+ ![](images/e8d908b3f04e76e7e23c729911296ba7efac7eb2e60ee7b14d85131699060bc6.jpg)
240
+ (e) LLaMA 3-8b-Instruct
241
+
242
+ ![](images/bdd6b42e6d1d77eea01d71da890ae52f2c4db1cdf6d77d5b4907281d7bc3de39.jpg)
243
+ (f) Gemma 1.1-7b-Instruct
244
+ Figure 3: Batched Editing performance as the number of sequential batched edits $T$ increases. The first row uses batch size 10 and the second row uses batch size 50.
245
+
246
+ # 3.4 ABLATION STUDY
247
+
248
+ We end this section with an ablation study on BaFT to showcase how each component contributes to the final performance. Results from continually editing LLaMA 2-7b on 100 pieces of ZsRE knowledge are presented in Tab 4. We note that introducing a coarse-grained subspace-level weighting (ss-w), which assigns the same weight to all bases, alone did not benefit ReFT. Moreover, both the locality regularization (lr) and the fine-grained basis-level weighting (ba-w) helped improve locality. Remarkably, the basis-level weighting, as observed in all Single Editing scenarios, did not lead to any degradation in edit performance. The locality regularization, while greatly improving locality, induced a trade-off with editing performance at the same time. Notably, this degradation is amplified when the subspace-level weighting is used, echoing our theoretical analysis.
249
+
250
+ In conclusion, the proposed BaFT makes two improvements over ReFT. First, the weighting offers fine-grained, basis-level learning, leading to better locality without hurting editing performance. Second, this fine-grained basis-level control allows one to regularize locality by altering only the important bases, leading to a better empirical editing-locality trade-off.
251
+
252
+ # 4 RELATED WORKS
253
+
254
+ Existing editing methods mainly fall into two classes.
255
+
256
+ Internal Storage updates model parameters for adaptation. Early efforts fine-tuned an LLM directly but suffered from severe forgetting of original knowledge (Wang et al., 2023). For more precise editing, Zhu et al. (2020) imposed a relaxed $\ell_{2}$ norm constraint on the parameter updates, and Huang et al. (2023); Dong et al. (2022a) limited the updates to specific feed-forward network (FFN) layers, based on findings that knowledge is often stored therein (Dai et al., 2021). For further refinement, the locate-and-edit
257
+
258
+ Table 4: Component effects in BaFT.
259
+
260
+ <table><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>ReFT</td><td>0.76</td><td>0.71</td><td>0.84</td><td>0.77</td></tr><tr><td>+ss-w</td><td>0.74</td><td>0.68</td><td>0.81</td><td>0.74</td></tr><tr><td>+ba-w</td><td>0.77</td><td>0.71</td><td>0.86</td><td>0.78</td></tr><tr><td>+ss-w&amp;lr</td><td>0.67</td><td>0.61</td><td>0.99</td><td>0.76</td></tr><tr><td>BaFT</td><td>0.73</td><td>0.67</td><td>0.98</td><td>0.79</td></tr></table>
261
+
262
+ paradigm (Meng et al., 2022a;b) first identifies the layer storing a specific piece of knowledge, and then modifies its parameters in an analytic form or via a least-squares solution. On the other hand, PEFT methods such as AdaLoRA (Zhang et al., 2023) also provide performance on par with locating-based solutions (Wu et al., 2023; Wang et al., 2024b). However, these methods are parameter-based and offer a similar level of control, in the sense that all inputs are altered in the same way. As a result, they suffer from a similar editing-locality trade-off (Wang et al., 2023; 2024d). These findings raise the question of to what extent knowledge can be accurately attributed to specific parameters (Hase et al., 2024). Inspired by the recent advance of improving an LLM's general abilities, such as commonsense reasoning, by fine-tuning its representations (Wu et al., 2024), in this work we show that updating representations at only a few locations can provide strong editing performance. By pushing the fine-tuning down to the basis level, our BaFT achieves finer-grained control and a superior editing-locality trade-off.
263
+
264
+ External Storage resorts to external memories without changing original parameters. Methods include the meta-learning based MEND (Mitchell et al., 2021) and its multi-task version InstructEdit (Zhang et al., 2024a), IKE (Zheng et al., 2023) and LTE (Jiang et al., 2024) that bear similarity to Retrieval-Augmented Generation (Gao et al., 2023; Wang et al., 2024a; Xu et al., 2024; Yu et al., 2025; Liu et al., 2025b), the augmentation based StableKE (Wei et al., 2024), and the proxy model based SERAC (Mitchell et al., 2022). Notwithstanding, these methods need large-scale, hard-to-access data to retrieve from (e.g., IKE, LTE) or to train extra models on (e.g., MEND, InstructEdit, SERAC). As a consequence, they have limited practicality and fall short on Continual Editing, which requires frequent updates (Wang et al., 2024d). Recently, methods specialized for Continual Editing were proposed (Hartvigsen et al., 2024; Yu et al., 2024; Wang et al., 2024d). These approaches inject lightweight adapters (Hartvigsen et al., 2024) or weight copies (Wang et al., 2024d) to memorize new knowledge, and learn a gating mechanism to determine whether to use the original or the new knowledge. Specifically, GRACE (Hartvigsen et al., 2024) maintains a code-book to determine which adapter to use based on representation similarity, MELO (Yu et al., 2024) uses dynamic LoRA, and WISE (Wang et al., 2024d) learns an activation threshold to trigger the newly learned weights. However, these methods have several limitations. First, they often show unsatisfactory generalizability, as observed in Wang et al. (2024d) and confirmed in our experiments. Second, they require prolonged training (and inference) time, due to the need to maintain a non-constant number of external memories. Finally, existing gating mechanisms cannot be learned when multiple pieces of knowledge appear at once, making them incompatible with Batched Editing. In comparison, BaFT learns a pre-specified set of parameters and lets the basis weights play the role of gating. This design makes BaFT suitable for both Continual and Batched Editing. Moreover, as editing and activation are conducted in a representation subspace, BaFT is able to achieve good generalizability with better parameter efficiency.
265
+
266
+ # 5 CONCLUSION AND FUTURE WORKS
267
+
268
+ In this work, we propose a new representation-based method for more efficient knowledge editing. Grounded in a theoretical analysis, we show that updating all selected representations with one linear subspace in a predetermined manner imposes a tension between editing and locality. We then propose BaFT as a better solution. Given a representation, BaFT first computes a weight for each basis that spans the linear subspace, and then conducts a linear update along each basis direction. Because the basis weights are determined from the current representation with non-linear functions, BaFT fine-tunes the representation in a non-linear way. This fine-grained control leads to better performance when editing three representative LLMs in various scenarios, on par with or outperforming the strongest baselines at much better parameter efficiency. As detailed in App A, this work has some limitations, which we plan to address in future work.
269
+
270
+ # ACKNOWLEDGEMENT
271
+
272
+ This work is supported in part by the US National Science Foundation under grant NSF IIS-2141037. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
273
+
274
+ # REFERENCES
275
+
276
+ Genrich Belitskii et al. Matrix norms and their applications, volume 36. Birkhäuser, 2013.
277
+ Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. A neural probabilistic language model. Advances in neural information processing systems, 13, 2000.
278
+ Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432-7439, 2020.
279
+ Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
280
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
281
+ Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
282
+ Wei Chen, Zichen Miao, and Qiang Qiu. Inner product-based neural network similarity. Advances in Neural Information Processing Systems, 36:73995-74020, 2023.
283
+ Wei Chen, Zichen Miao, and Qiang Qiu. Parameter-efficient tuning of large convolutional models. arXiv preprint arXiv:2403.00269, 2024.
284
+ Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, and Mor Geva. Evaluating the ripple effects of knowledge editing in language models. Transactions of the Association for Computational Linguistics, 12:283-298, 2024.
285
+ Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. Knowledge neurons in pretrained transformers. arXiv preprint arXiv:2104.08696, 2021.
286
+ Nicola De Cao, Wilker Aziz, and Ivan Titov. Editing factual knowledge in language models. arXiv preprint arXiv:2104.08164, 2021.
287
+ Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 5(3):220-235, 2023.
288
+ Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, and Lei Li. Calibrating factual knowledge in pretrained language models. arXiv preprint arXiv:2210.03329, 2022a.
289
+ Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. A survey for in-context learning. arXiv preprint arXiv:2301.00234, 2022b.
290
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models, 2024.
291
+ William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity, 2022. URL https://arxiv.org/abs/2101.03961.
292
+
293
+ Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Haofen Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2, 2023.
294
+ Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509, 2022.
295
+ Tom Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, and Marzyeh Ghassemi. Aging with grace: Lifelong model editing with discrete key-value adaptors. Advances in Neural Information Processing Systems, 36, 2024.
296
+ Peter Hase, Mohit Bansal, Been Kim, and Asma Ghandeharioun. Does localization inform editing? surprising differences in causality-based localization vs. knowledge editing in language models. Advances in Neural Information Processing Systems, 36, 2024.
297
+ Evan Hernandez, Belinda Z Li, and Jacob Andreas. Inspecting and editing knowledge representations in language models. arXiv preprint arXiv:2304.00740, 2023.
298
+ Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International conference on machine learning, pp. 2790-2799, 2019.
299
+ Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models, 2021.
300
+ Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, and Zhang Xiong. Transformer-patcher: One mistake worth one neuron. arXiv preprint arXiv:2301.09785, 2023.
301
+ Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38, 2023.
302
+ Yuxin Jiang, Yufei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, et al. Learning to edit: Aligning llms with knowledge editing. arXiv preprint arXiv:2402.11905, 2024.
303
+ Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199-22213, 2022.
304
+ Alex Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron Courville, and Yoshua Bengio. Professor forcing: A new algorithm for training recurrent networks, 2016.
305
+ Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
306
+ Tianci Liu, Zihan Dong, Linjun Zhang, Haoyu Wang, and Jing Gao. Mitigating heterogeneous token overfitting in llm knowledge editing. arXiv preprint arXiv:2502.00602, 2025a.
307
+ Tianci Liu, Haoxiang Jiang, Tianze Wang, Ran Xu, Yue Yu, Linjun Zhang, Tuo Zhao, and Haoyu Wang. Roserag: Robust retrieval-augmented generation with small-scale llms via margin-aware preference optimization. arXiv preprint arXiv:2502.10993, 2025b.
308
+ Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2019.
309
+ Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. Advances in Neural Information Processing Systems, 35:17359-17372, 2022a.
310
+ Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. Mass-editing memory in a transformer. arXiv preprint arXiv:2210.07229, 2022b.
311
+
312
+ Zichen Miao, Ze Wang, Wei Chen, and Qiang Qiu. Continual learning with filter atom swapping. In International Conference on Learning Representations, 2021.
313
+ Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. Fast model editing at scale. arXiv preprint arXiv:2110.11309, 2021.
314
+ Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D Manning, and Chelsea Finn. Memory-based model editing at scale. In International Conference on Machine Learning, pp. 15817-15831, 2022.
315
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35: 27730-27744, 2022.
316
+ Kiho Park, Yo Joong Choe, and Victor Veitch. The linear representation hypothesis and the geometry of large language models. arXiv preprint arXiv:2311.03658, 2023.
317
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
318
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551, 2020.
319
+ Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. Do massively pretrained language models make better storytellers? arXiv preprint arXiv:1909.10705, 2019.
320
+ Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer, 2017.
321
+ Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, et al. Gemma: Open models based on gemini research and technology, 2024.
322
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
323
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
324
+ Haoyu Wang, Ruirui Li, Haoming Jiang, Jinjin Tian, Zhengyang Wang, Chen Luo, Xianfeng Tang, Monica Cheng, Tuo Zhao, and Jing Gao. Blendfilter: Advancing retrieval-augmented large language models via query generation blending and knowledge filtering. arXiv preprint arXiv:2402.11129, 2024a.
325
+ Haoyu Wang, Tianci Liu, Ruirui Li, Monica Cheng, Tuo Zhao, and Jing Gao. Roselora: Row and column-wise sparse low-rank adaptation of pre-trained language model for knowledge editing and fine-tuning. arXiv preprint arXiv:2406.10777, 2024b.
326
+ Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024c.
327
+ Peng Wang, Zexi Li, Ningyu Zhang, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, and Huajun Chen. Wise: Rethinking the knowledge memory for lifelong model editing of large language models. arXiv preprint arXiv:2405.14768, 2024d.
328
+
329
+ Peng Wang, Ningyu Zhang, Bozhong Tian, Zekun Xi, Yunzhi Yao, Ziwen Xu, Mengru Wang, Shengyu Mao, Xiaohan Wang, Siyuan Cheng, Kangwei Liu, Yuansheng Ni, Guozhou Zheng, and Huajun Chen. Easyedit: An easy-to-use knowledge editing framework for large language models, 2024e.
330
+ Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, et al. Knowledge editing for large language models: A survey. arXiv preprint arXiv:2310.16218, 2023.
331
+ Zihao Wei, Liang Pang, Hanxing Ding, Jingcheng Deng, Huawei Shen, and Xueqi Cheng. Stable knowledge editing in large language models. arXiv preprint arXiv:2402.13048, 2024.
332
+ Suhang Wu, Minlong Peng, Yue Chen, Jinsong Su, and Mingming Sun. Eva-kellm: A new benchmark for evaluating knowledge editing of llms. arXiv preprint arXiv:2308.09954, 2023.
333
+ Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D Manning, and Christopher Potts. Reft: Representation finetuning for language models. arXiv preprint arXiv:2404.03592, 2024.
334
+ Ran Xu, Hui Liu, Sreyashi Nag, Zhenwei Dai, Yaochen Xie, Xianfeng Tang, Chen Luo, Yang Li, Joyce C Ho, Carl Yang, et al. Simrag: Self-improving retrieval-augmented generation for adapting large language models to specialized domains. arXiv preprint arXiv:2410.17952, 2024.
335
+ Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, and Ningyu Zhang. Editing large language models: Problems, methods, and opportunities. arXiv preprint arXiv:2305.13172, 2023.
336
+ Lang Yu, Qin Chen, Jie Zhou, and Liang He. Melo: Enhancing model editing with neuron-indexed dynamic lora. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 19449-19457, 2024.
337
+ Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, and Bryan Catanzaro. Rankrag: Unifying context ranking with retrieval-augmented generation in llms. Advances in Neural Information Processing Systems, 37:121156-121184, 2025.
338
+ Ningyu Zhang, Bozhong Tian, Siyuan Cheng, Xiaozhuan Liang, Yi Hu, Kouying Xue, Yanjie Gou, Xi Chen, and Huajun Chen. Instructedit: Instruction-based knowledge editing for large language models. arXiv preprint arXiv:2402.16123, 2024a.
339
+ Ningyu Zhang, Yunzhi Yao, Bozhong Tian, Peng Wang, Shumin Deng, Mengru Wang, Zekun Xi, Shengyu Mao, Jintian Zhang, Yuansheng Ni, et al. A comprehensive study of knowledge editing for large language models. arXiv preprint arXiv:2401.01286, 2024b.
340
+ Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. Adaptive budget allocation for parameter-efficient fine-tuning. In International Conference on Learning Representations, 2023.
341
+ Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, and Baobao Chang. Can we edit factual knowledge by in-context learning? arXiv preprint arXiv:2305.12740, 2023.
342
+ Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, et al. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. arXiv preprint arXiv:2302.09419, 2023.
343
+ Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, and Sanjiv Kumar. Modifying memories in transformer models. arXiv preprint arXiv:2012.00363, 2020.
344
+
345
+ # A MORE DISCUSSIONS AND LIMITATIONS ON BAFT
346
+
347
+ In this section we provide more discussions on the proposed BaFT. Fig 4 demonstrates the workflow of our method. There are also some limitations in this work, which we plan to explore in the future.
348
+
349
+ ![](images/7f33bc718344280b0a6c74ecc16ce90bbd2add2ebb803d5c50e90ecc3a65b38b.jpg)
350
+ Figure 4: BaFT learns basis-level weights to edit different representations (highlighted in different colors). When using constant weights, BaFT reduces to ReFT.
351
+
352
+ First, the empirical success of BaFT was mainly established on the standard benchmark EasyEdit (Wang et al., 2024e), which may not be sufficient to reflect diverse real-world applications. Second, BaFT, as a generalization of ReFT, requires hyper-parameter tuning to determine proper positions and layers at which to add interventions. Our choices were based on the recommended values from ReFT (Wu et al., 2024). We plan to explore automating this process by imposing proper sparsity constraints on the weights in our future work. Third, the promising performance of BaFT demonstrates its potential for efficient knowledge editing. However, it is still an open question whether representation-based methods are capable of fitting any edit (or update) learnable by parameter-based methods. In other words, it is unknown whether there is some knowledge that can be learned by a parameter-based method but is unlearnable by updating representations. We plan to explore this direction in our future work.
353
+
354
+ # B OMITTED PROOF
355
+
356
+ We include the omitted proofs here.
357
+
358
+ # B.1 PROOF OF THM 2.3
359
+
360
+ We start by restating the two assumptions and the theorem.
361
+
362
+ Assumption B.1. Let text $\mathbf{x}$ encode $s, r$; then text $\mathbf{y}$ generated by the LM will convey $o$ if its intermediate representation takes some targeted value $t$.
363
+
364
+ Assumption B.2. For any $\pmb{h}$ carrying some high-level knowledge, there exists a positive $\varepsilon(\pmb{h})$-radius $\ell_2$ ball $B(\pmb{h}, \varepsilon(\pmb{h}))$ around $\pmb{h}$ such that any $\pmb{h}' \in B(\pmb{h}, \varepsilon(\pmb{h}))$ conveys the same knowledge. We refer to $B(\pmb{h}, \varepsilon(\pmb{h}))$ as the stable-ball of $\pmb{h}$.
365
+
366
+ Theorem B.3. When fine-tuning an LM, ReFT learns to update the old representation $\pmb{h}_0$ to the targeted $\pmb{t} = \Phi(\pmb{h}_0)$. If ReFT maintains good generality such that $\forall \pmb{h} \in B(\pmb{h}_0, \varepsilon(\pmb{h}_0))$,
367
+
368
+ $$
369
+ \left\| \Phi (\boldsymbol {h}) - \Phi (\boldsymbol {h} _ {0}) \right\| = \left\| \Phi (\boldsymbol {h}) - \boldsymbol {t} \right\| < \varepsilon (\boldsymbol {t}),
370
+ $$
371
+
372
+ where $\| \cdot \|$ denotes the $\ell_2$ norm. Then for any irrelevant input $h_{ir}$ with a small stable-ball radius
373
+
374
+ $$
375
+ \varepsilon \left(\boldsymbol {h} _ {i r}\right) < \frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {t}) + \varepsilon \left(\boldsymbol {h} _ {0}\right))}{\varepsilon (\boldsymbol {t}) + 2 \varepsilon \left(\boldsymbol {h} _ {0}\right)} \varepsilon \left(\boldsymbol {h} _ {0}\right),
376
+ $$
377
+
378
+ and is not too far from $h_0$ such that
379
+
380
+ $$
381
+ \left\| \boldsymbol {h} _ {i r} - \boldsymbol {h} _ {0} \right\| = \varepsilon \left(\boldsymbol {h} _ {i r}\right) + \varepsilon \left(\boldsymbol {h} _ {0}\right),
382
+ $$
383
+
384
+ ReFT will output $\Phi (\pmb{h}_{ir})\notin B(\pmb{h}_{ir},\varepsilon (\pmb{h}_{ir}))$ and break its locality guarantee.
385
+
386
+ Proof. First, since the old and new knowledge are associated with different objects $o$, by Asmp 2.2, $h_0$ and $t$ must have non-overlapping stable-balls. Otherwise, we could find
387
+
388
+ $$
389
+ \boldsymbol {h} \in B (\boldsymbol {h} _ {0}, \varepsilon (\boldsymbol {h} _ {0})) \cap B (\boldsymbol {t}, \varepsilon (\boldsymbol {t})),
390
+ $$
391
+
392
+ that conveys the knowledge of both $h_0$ and $t$ even though they differ, which is impossible. This implies
393
+
394
+ $$
395
+ \left\| \boldsymbol {t} - \boldsymbol {h} _ {0} \right\| \geq \varepsilon (\boldsymbol {t}) + \varepsilon (\boldsymbol {h} _ {0}).
396
+ $$
397
+
398
+ In addition, by definition of ReFT, we have
399
+
400
+ $$
401
+ \begin{array}{l} \boldsymbol {t} - \boldsymbol {h} _ {0} = \Phi (\boldsymbol {h} _ {0}) - \boldsymbol {h} _ {0} \\ = \boldsymbol {h} _ {0} + \mathbf {R} ^ {\top} (\mathbf {A} \boldsymbol {h} _ {0} + \boldsymbol {b} - \mathbf {R} \boldsymbol {h} _ {0}) - \boldsymbol {h} _ {0} \\ = \mathbf {R} ^ {\top} (\mathbf {A} - \mathbf {R}) \boldsymbol {h} _ {0} + \mathbf {R} ^ {\top} \boldsymbol {b} \\ \stackrel {(a)} {=} \mathbf {H} \boldsymbol {h} _ {0} + \mathbf {R} ^ {\top} \boldsymbol {b}, \\ \end{array}
402
+ $$
403
+
404
+ where in the last step $(a)$ , we defined $\mathbf{H} \triangleq \mathbf{R}^\top (\mathbf{A} - \mathbf{R})$ for simplicity.
405
+
406
+ Next, according to the generality condition, for any $\pmb{h} \in B(\pmb{h}_0, \varepsilon(\pmb{h}_0))$ , we have
407
+
408
+ $$
409
+ \begin{array}{l} \left\| \Phi (\boldsymbol {h}) - \Phi (\boldsymbol {h} _ {0}) \right\| \\ = \| \left(\mathbf {I} + \mathbf {R} ^ {\top} (\mathbf {A} - \mathbf {R})\right) (h - h _ {0}) \| \\ = \| (\mathbf {I} + \mathbf {H}) (h - h _ {0}) \| \\ < \varepsilon (t). \\ \end{array}
410
+ $$
411
+
412
+ Since $\pmb{h}$ can take any direction, we know $\pmb{h} - \pmb{h}_0$ can be an arbitrary vector with norm no greater than $\varepsilon(\pmb{h}_0)$. Letting $\pmb{h} - \pmb{h}_0$ take the direction of the first right singular vector of $\mathbf{I} + \mathbf{H}$, we have
413
+
414
+ $$
415
+ \left\| (\mathbf {I} + \mathbf {H}) \left(\boldsymbol {h} - \boldsymbol {h} _ {0}\right) \right\| = \sigma_ {\max } (\mathbf {I} + \mathbf {H}) \| \boldsymbol {h} - \boldsymbol {h} _ {0} \| < \varepsilon (\boldsymbol {t}).
416
+ $$
417
+
418
+ This implies that the operator norm of $\mathbf{I} + \mathbf{H}$ , denoted by $\sigma_{\mathrm{max}}$ , is upper bounded by
419
+
420
+ $$
421
+ \sigma_ {\max } (\mathbf {I} + \mathbf {H}) \leq \frac {\varepsilon (\boldsymbol {t})}{\varepsilon (\boldsymbol {h} _ {0})}.
422
+ $$
423
+
424
+ By the triangle inequality for the operator norm (Belitskii et al., 2013), we further have
425
+
426
+ $$
427
+ \sigma_ {\max } (\mathbf {H}) = \sigma_ {\max } (\mathbf {I} + \mathbf {H} - \mathbf {I}) \leq \sigma_ {\max } (\mathbf {I} + \mathbf {H}) + \sigma_ {\max } (\mathbf {I}) \leq \frac {\varepsilon (\mathbf {t})}{\varepsilon (\mathbf {h} _ {0})} + 1.
428
+ $$
429
+
430
+ Now, for any irrelevant $h_{\mathrm{ir}}$ , we have
431
+
432
+ $$
433
+ \begin{array}{l} \left\| \Phi \left(\boldsymbol {h} _ {\mathrm {i r}}\right) - \boldsymbol {h} _ {\mathrm {i r}} \right\| \\ = \left\| \mathbf {H} \boldsymbol {h} _ {\mathrm {i r}} + \mathbf {R} ^ {\top} \boldsymbol {b} \right\| \\ \stackrel {(a)} {=} \left\| \mathbf {H} h _ {\mathrm {i r}} + \left(\boldsymbol {t} - \boldsymbol {h} _ {0}\right) - \mathbf {H} h _ {0} \right\| \\ = \| \mathbf {H} \left(\boldsymbol {h} _ {\mathrm {i r}} - \boldsymbol {h} _ {0}\right) + \left(\boldsymbol {t} - \boldsymbol {h} _ {0}\right) \| \\ \end{array}
434
+ $$
435
+
436
+ $$
437
+ \stackrel {(b)} {\geq} \left| \| (\boldsymbol {t} - \boldsymbol {h} _ {0}) \| - \| \mathbf {H} \left(\boldsymbol {h} _ {\text {i r}} - \boldsymbol {h} _ {0}\right) \| \right|, \tag {†}
438
+ $$
439
+
440
+ where $(a)$ substitutes
441
+
442
+ $$
443
+ \boldsymbol {t} - \boldsymbol {h} _ {0} = \mathbf {H} \boldsymbol {h} _ {0} + \mathbf {R} ^ {\top} \boldsymbol {b},
444
+ $$
445
+
446
+ and $(b)$ holds from the reverse triangle inequality.
447
+
448
+ When the irrelevant $h_{\mathrm{ir}}$ has a small stable-ball radius,
449
+
450
+ $$
451
+ \varepsilon \left(\boldsymbol {h} _ {\mathrm {i r}}\right) < \frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {t}) + \varepsilon \left(\boldsymbol {h} _ {0}\right))}{2 \varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})} \varepsilon \left(\boldsymbol {h} _ {0}\right),
452
+ $$
453
+
454
+ and is close to $h_0$ such that
455
+
456
+ $$
457
+ \left\| \boldsymbol {h} _ {\mathrm {i r}} - \boldsymbol {h} _ {0} \right\| = \varepsilon (\boldsymbol {h} _ {\mathrm {i r}}) + \varepsilon (\boldsymbol {h} _ {0}),
458
+ $$
459
+
460
+ we have
461
+
462
+ $$
463
+ \begin{array}{l} \left\| \mathbf {H} \left(\boldsymbol {h} _ {\mathrm {i r}} - \boldsymbol {h} _ {0}\right) \right\| \leq \sigma_ {\max } (\mathbf {H}) \left\| \boldsymbol {h} _ {\mathrm {i r}} - \boldsymbol {h} _ {0} \right\| \\ \leq \left(\frac {\varepsilon (\boldsymbol {t})}{\varepsilon (\boldsymbol {h} _ {0})} + 1\right) \left(\varepsilon \left(\boldsymbol {h} _ {\mathrm {i r}}\right) + \varepsilon \left(\boldsymbol {h} _ {0}\right)\right) \\ \leq \left(\frac {\varepsilon (\boldsymbol {t}) + \varepsilon (\boldsymbol {h} _ {0})}{\varepsilon (\boldsymbol {h} _ {0})}\right) \\ \times \left(\varepsilon (\boldsymbol {h} _ {0}) \frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t}))}{2 \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})} + \varepsilon (\boldsymbol {h} _ {0})\right) \\ \stackrel {(a)} {< } \left(\frac {\varepsilon (\boldsymbol {t}) + \varepsilon (\boldsymbol {h} _ {0})}{\varepsilon (\boldsymbol {h} _ {0})}\right) \\ \times \left(\varepsilon (\boldsymbol {h} _ {0}) \frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t}))}{\varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})} + \varepsilon (\boldsymbol {h} _ {0})\right) \\ = \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})) \\ + \left(\varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})\right) \\ = \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| \\ \end{array}
464
+ $$
465
+
466
+ where $(a)$ holds from the fact that $\varepsilon(h_0) > 0$ , so dropping one $\varepsilon(h_0)$ in the denominator provides a valid upper bound. Therefore, we can safely remove the absolute value function in Eqn ( $\dagger$ ) and get
467
+
468
+ $$
469
+ \begin{array}{l} \left\| \Phi \left(\mathbf {h} _ {\mathrm {i r}}\right) - \mathbf {h} _ {\mathrm {i r}} \right\| = \left\| \mathbf {H} \mathbf {h} _ {\mathrm {i r}} + \mathbf {R} ^ {\top} \mathbf {b} \right\| \\ \geq \left| \left| \left| \boldsymbol {t} - \boldsymbol {h} _ {0} \right| \right| - \left| \left| \mathbf {H} \left(\boldsymbol {h} _ {\mathrm {i r}} - \boldsymbol {h} _ {0}\right) \right| \right| \right| \\ = \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - \| \mathbf {H} (\boldsymbol {h} _ {\mathrm {i r}} - \boldsymbol {h} _ {0}) \| \\ \geq \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - \sigma_ {\max } (\mathbf {H}) \| \boldsymbol {h} _ {\text {i r}} - \boldsymbol {h} _ {0} \| \\ \geq \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - \left(\frac {\varepsilon (\boldsymbol {t})}{\varepsilon (\boldsymbol {h} _ {0})} + 1\right) \left(\varepsilon \left(\boldsymbol {h} _ {\mathrm {i r}}\right) + \varepsilon \left(\boldsymbol {h} _ {0}\right)\right) \\ \geq \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - \left(\frac {\varepsilon (\boldsymbol {t}) + \varepsilon (\boldsymbol {h} _ {0})}{\varepsilon (\boldsymbol {h} _ {0})}\right) \\ \times \left(\frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t}))}{2 \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})} \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {h} _ {0})\right) \\ = \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - \left(\frac {\varepsilon (\boldsymbol {t}) + \varepsilon (\boldsymbol {h} _ {0})}{\varepsilon (\boldsymbol {h} _ {0})}\right) \\ \times \left(\frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})) + 2 \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})}{2 \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})} \varepsilon (\boldsymbol {h} _ {0})\right) \\ = \| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {t}) + \varepsilon (\boldsymbol {h} _ {0})) \left(\frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| + \varepsilon (\boldsymbol {h} _ {0})}{2 \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})}\right). \\ \end{array}
470
+ $$
471
+
472
+ Finally, it is easy to verify that this term is an upper bound of $\varepsilon(h_{\mathrm{ir}})$ , since
473
+
474
+ $$
475
+ \begin{array}{l} \left\| \Phi \left(\boldsymbol {h} _ {\mathrm {i r}}\right) - \boldsymbol {h} _ {\mathrm {i r}} \right\| - \varepsilon \left(\boldsymbol {h} _ {\mathrm {i r}}\right) = \left\| \mathbf {H} \boldsymbol {h} _ {\mathrm {i r}} + \mathbf {R} ^ {\top} \boldsymbol {b} \right\| - \varepsilon \left(\boldsymbol {h} _ {\mathrm {i r}}\right) \\ \stackrel {(a)} {\geq} \left(\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {t}) + \varepsilon (\boldsymbol {h} _ {0})) \left(\frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| + \varepsilon (\boldsymbol {h} _ {0})}{2 \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})}\right)\right) \\ - \left(\frac {\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - (\varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t}))}{2 \varepsilon (\boldsymbol {h} _ {0}) + \varepsilon (\boldsymbol {t})} \varepsilon (\boldsymbol {h} _ {0})\right) \\ = \frac {1}{2 \varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})} \left(\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| \left(2 \varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})\right) \right. \\ - \left(\left\| \boldsymbol {t} - \boldsymbol {h} _ {0} \right\| + \varepsilon \left(\boldsymbol {h} _ {0}\right)\right) \left(\varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})\right) \\ \left. - \left(\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| - \left(\varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})\right)\right) \varepsilon \left(\boldsymbol {h} _ {0}\right)\right) \\ = \frac {1}{2 \varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})} \left(\| \boldsymbol {t} - \boldsymbol {h} _ {0} \| \left(2 \varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t}) - \varepsilon \left(\boldsymbol {h} _ {0}\right) - \varepsilon (\boldsymbol {t}) - \varepsilon \left(\boldsymbol {h} _ {0}\right)\right) \right. \\ \left. - \left(\varepsilon \left(\boldsymbol {h} _ {0}\right) \left(\varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})\right) - \varepsilon \left(\boldsymbol {h} _ {0}\right) \left(\varepsilon \left(\boldsymbol {h} _ {0}\right) + \varepsilon (\boldsymbol {t})\right)\right)\right) \\ = 0, \\ \end{array}
476
+ $$
477
+
478
+ where $(a)$ applies the lower bound to the first term, and the upper bound to the second term. In conclusion, we have
479
+
480
+ $$
481
+ \left\| \Phi \left(\boldsymbol {h} _ {\mathrm {i r}}\right) - \boldsymbol {h} _ {\mathrm {i r}} \right\| \geq \varepsilon \left(\boldsymbol {h} _ {\mathrm {i r}}\right),
482
+ $$
483
+
484
+ i.e., $\Phi(h_{\mathrm{ir}}) \notin B(h_{\mathrm{ir}}, \varepsilon(h_{\mathrm{ir}}))$ . This completes our proof.
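+
+ As a quick sanity check of this bound, consider for instance $\varepsilon(\boldsymbol{h}_0) = \varepsilon(\boldsymbol{t}) = 1$ and $\|\boldsymbol{t} - \boldsymbol{h}_0\| = 4$ (a numerical instantiation chosen purely for illustration), with $\|\boldsymbol{h}_{\mathrm{ir}} - \boldsymbol{h}_0\| = \varepsilon(\boldsymbol{h}_{\mathrm{ir}}) + 1$ as in the theorem. The radius condition then requires $\varepsilon(\boldsymbol{h}_{\mathrm{ir}}) < (4 - 2)/3 = 2/3$, and the lower bound derived above evaluates to
+
+ $$
+ \left\| \Phi \left(\boldsymbol {h} _ {\mathrm {i r}}\right) - \boldsymbol {h} _ {\mathrm {i r}} \right\| \geq 4 - (1 + 1) \cdot \frac {4 + 1}{2 + 1} = \frac {2}{3} > \varepsilon \left(\boldsymbol {h} _ {\mathrm {i r}}\right),
+ $$
+
+ so the edited representation indeed leaves the stable-ball $B(\boldsymbol{h}_{\mathrm{ir}}, \varepsilon(\boldsymbol{h}_{\mathrm{ir}}))$.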
485
+
486
+ ![](images/2017f66b6d89360ea1e2fd73ab5fb9327f9ba4ecfefb6f1dbfea70a60c49e376.jpg)
487
+
488
+ # B.2 MORE DISCUSSIONS ON ASMP 2.1.
489
+
490
+ Our Thm 2.3 is built upon Asmp 2.1. Informally, it assumes that the knowledge can be generated if the representation takes some specific value. While this assumption may not hold, especially in challenging scenarios (see App A for more discussions), it is reasonable in the context of Thm 2.3.
491
+
492
+ In particular, the goal of Thm 2.3 is to reveal how the linearity in ReFT can inevitably hurt locality, even if it appears successful in editing. Therefore, our focus is on cases where ReFT is capable of conducting the edits. The existence of such cases is confirmed by our experiments, and by ReFT's effectiveness in diverse post-training tasks as demonstrated in Wu et al. (2024). Presuming such success, and given that ReFT can only update representations, Asmp 2.1 assumes that by updating representations to some targeted (possibly unknown) value, ReFT steers the output y to convey the desired knowledge.
493
+
494
+ # B.3 PROOF OF LEM 2.4
495
+
496
+ Lemma B.4. Let $\mathbf{R} = [r_1; \ldots; r_r], \mathbf{A} = [a_1; \ldots; a_r], \pmb{b}^\top = (b_1, \ldots, b_r)$, and $\mathbf{W}(\pmb{h}) = \text{diag}(w_1(\pmb{h}), \ldots, w_r(\pmb{h}))$. Then BaFT
497
+
498
+ $$
499
+ \Phi (\boldsymbol {h}) = \boldsymbol {h} + \sum_ {k = 1} ^ {r} w _ {k} (\boldsymbol {h}) \boldsymbol {r} _ {k} \left(\boldsymbol {a} _ {k} ^ {\top} \boldsymbol {h} + b _ {k} - \boldsymbol {r} _ {k} ^ {\top} \boldsymbol {h}\right),
500
+ $$
501
+
502
+ can be expressed in a matrix form
503
+
504
+ $$
505
+ \Phi (\boldsymbol {h}) = \boldsymbol {h} + \mathbf {R} ^ {\top} \mathbf {W} (\boldsymbol {h}) (\mathbf {A} \boldsymbol {h} + \boldsymbol {b} - \mathbf {R} \boldsymbol {h}).
506
+ $$
507
+
508
+ When using the constant weighting $\mathbf{W}(\pmb{h}) = \mathbf{I}$, BaFT reduces to ReFT. Otherwise, the rows of $\mathbf{W}(\pmb{h})\mathbf{R}$ are not orthonormal, making BaFT and ReFT nonequivalent.
509
+
510
+ Table 5: Hyper-parameters of different methods. For baselines, we only list settings that differ from Wang et al. (2024e).
511
+
512
+ <table><tr><td rowspan="2"></td><td rowspan="2">HParams.</td><td>LLaMA 2-7b(-chat)</td><td>LLaMA 3-8b-Instruct</td><td>Gemma 1.1-7b-Instruct</td></tr><tr><td>Value</td><td>Value</td><td>Value</td></tr><tr><td>FT-L</td><td>/</td><td colspan="3">Following Wang et al. (2024e)&#x27;s recommendation for LLaMA 2.</td></tr><tr><td>ROME</td><td>/</td><td colspan="3">Following Wang et al. (2024e)&#x27;s recommendation for LLaMA 2.</td></tr><tr><td>MEMIT</td><td>/</td><td colspan="2">Following Wang et al. (2024e)&#x27;s recommendation for LLaMA 2.</td><td>/</td></tr><tr><td>AdaLoRA</td><td>Maximum Steps</td><td colspan="3">70 for Single and Continual Editing; 200 for Batched Editing</td></tr><tr><td rowspan="2">GRACE</td><td>Maximum Steps</td><td>100</td><td>250</td><td>100</td></tr><tr><td>Lay. to Interven</td><td>27</td><td>27</td><td>24</td></tr><tr><td>WISElight</td><td>Param. Updates</td><td colspan="3">Restrict the original WISE logic to a randomly selected 1/8 area.</td></tr><tr><td rowspan="7">BaFT &amp; ReFT</td><td>Subspace Rank</td><td colspan="3">12</td></tr><tr><td>Pos. to Intervene</td><td colspan="3">Last 3 of Input + Output</td></tr><tr><td>Lay. to Intervene</td><td>9;18;24;28</td><td>9;18;24;28</td><td>18;20;22;24</td></tr><tr><td>Learning Rate</td><td colspan="3">3e-4 for Single and Continual Editing; 1e-4 for Batched Editing</td></tr><tr><td>Maximum Steps</td><td colspan="3">40 for Single and Continual Editing; 70 for Batched Editing</td></tr><tr><td>Locality Reg. (BaFT)</td><td>α = 0.01, β = 0.05, γ = 0.02</td><td>α = 0.01, β = 0.1, γ = 0.05</td><td>α = 0.01, β = 0.1, γ = 0.05</td></tr><tr><td>Maximum Steps</td><td colspan="3">40 for Single and Continual Editing; 70 for Batched Editing</td></tr></table>
513
+
514
+ Proof. The derivations essentially follow from the fact that a matrix product can be expressed as a sum of outer products. In particular, we have
515
+
516
+ $$
517
+ \begin{array}{l} \Phi (\boldsymbol {h}) = \boldsymbol {h} + \sum_ {k = 1} ^ {r} w _ {k} (\boldsymbol {h}) \boldsymbol {r} _ {k} \left(\boldsymbol {a} _ {k} ^ {\top} \boldsymbol {h} + b _ {k} - \boldsymbol {r} _ {k} ^ {\top} \boldsymbol {h}\right) \\ = \boldsymbol {h} + \left(\sum_ {k = 1} ^ {r} w _ {k} (\boldsymbol {h}) \boldsymbol {r} _ {k} \boldsymbol {a} _ {k} ^ {\top} - \sum_ {k = 1} ^ {r} w _ {k} (\boldsymbol {h}) \boldsymbol {r} _ {k} \boldsymbol {r} _ {k} ^ {\top}\right) \boldsymbol {h} + \sum_ {k = 1} ^ {r} w _ {k} (\boldsymbol {h}) \boldsymbol {r} _ {k} b _ {k} \\ = \boldsymbol {h} + \left(\mathbf {R} ^ {\top} \mathbf {W} (\boldsymbol {h}) \mathbf {A} - \mathbf {R} ^ {\top} \mathbf {W} (\boldsymbol {h}) \mathbf {R}\right) \boldsymbol {h} + \mathbf {R} ^ {\top} \mathbf {W} (\boldsymbol {h}) \boldsymbol {b} \\ = \boldsymbol {h} + \mathbf {R} ^ {\top} \mathbf {W} (\boldsymbol {h}) ((\mathbf {A} - \mathbf {R}) \boldsymbol {h} + \boldsymbol {b}) \\ = \boldsymbol {h} + \mathbf {R} ^ {\top} \mathbf {W} (\boldsymbol {h}) (\mathbf {A} \boldsymbol {h} + \boldsymbol {b} - \mathbf {R} \boldsymbol {h}), \\ \end{array}
518
+ $$
519
+
520
+ where setting $\mathbf{W}(\boldsymbol{h}) = \mathbf{I}$ to the identity matrix reduces BaFT to ReFT. This completes the proof.
521
+
522
+ ![](images/59cde5495506f654c65e89e4c19ae19d0089c4851ddfae85690f6f237a458eef.jpg)
523
+
524
+ # C IMPLEMENTATION DETAILS
525
+
526
+ We provide more implementation details about different methods.
527
+
528
+ Throughout all experiments, BaFT used a logistic regression for $w_{k}(\pmb{h})$ for all $k \in [r]$ . ReFT was implemented as a special case of BaFT with constant weight $\mathbf{W} = \mathbf{I}$ ; the load-balancing loss and the optional locality regularization were removed for ReFT, as they are defined on the weights. In addition, BaFT and ReFT used the same optimizer, AdamW (Loshchilov & Hutter, 2019), and the same learning rate. Early stopping was applied when the training loss fell below a pre-specified threshold of 0.01. We also added this early stopping to AdaLoRA after observing a similar improvement. We kept encountering numerical issues when running MEMIT on Gemma, so we omit these results. Other hyper-parameters are reported in Tab 5.
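+
+ For concreteness, the NumPy sketch below implements the basis-level intervention $\Phi(\boldsymbol{h}) = \boldsymbol{h} + \mathbf{R}^{\top}\mathbf{W}(\boldsymbol{h})(\mathbf{A}\boldsymbol{h} + \boldsymbol{b} - \mathbf{R}\boldsymbol{h})$ with logistic-regression gates $w_k(\boldsymbol{h})$ as described above; the gate parameterization (`U`, `c`) and all variable names are illustrative assumptions rather than the exact implementation.
+
+ ```python
+ import numpy as np
+
+ def sigmoid(z):
+     return 1.0 / (1.0 + np.exp(-z))
+
+ def baft_intervention(h, R, A, b, U, c):
+     """Basis-level intervention: h + R^T W(h) (A h + b - R h).
+
+     h : (d,)   hidden representation
+     R : (r, d) basis rows r_k (orthonormal)
+     A : (r, d) learned projection rows a_k
+     b : (r,)   learned offsets b_k
+     U, c : (r, d), (r,) assumed logistic-regression gate parameters
+     """
+     w = sigmoid(U @ h + c)          # per-basis gates w_k(h) in (0, 1)
+     edit = (A @ h + b) - (R @ h)    # target minus current basis coordinates
+     return h + R.T @ (w * edit)
+
+ def reft_intervention(h, R, A, b):
+     """ReFT as the special case with constant weighting W(h) = I."""
+     return h + R.T @ ((A @ h + b) - (R @ h))
+ ```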
529
+
530
+ # D MORE EXPERIMENT RESULTS
531
+
532
+ # D.1 COMPLETE BATCHED CONTINUAL EDITING RESULTS
533
+
534
+ Here we report the complete Batched Editing results in Tab 6 and Tab 7, using batch sizes 10 and 50, respectively. The averaged results are shown in Fig 3. We note that such a batched setting makes knowledge editing resemble conventional continual learning more closely (Miao et al., 2021; Chen et al., 2023; Wang et al., 2024c).
535
+
536
+ Table 6: Batched Editing performance on ZsRE dataset, evaluated after conducting $T$ times of editing with batch size 10 sequentially. Best Avg. results are in bold and second best are underlined.
537
+
538
+ <table><tr><td></td><td colspan="4">T=1</td><td colspan="4">T=10</td><td colspan="4">T=100</td></tr><tr><td></td><td colspan="12">LLaMA 2-7b</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>MEMIT</td><td>0.89</td><td>0.84</td><td>0.97</td><td>0.90</td><td>0.87</td><td>0.82</td><td>0.93</td><td>0.87</td><td>0.04</td><td>0.04</td><td>0.02</td><td>0.03</td></tr><tr><td>FT-L</td><td>0.43</td><td>0.42</td><td>0.87</td><td>0.57</td><td>0.12</td><td>0.10</td><td>0.17</td><td>0.13</td><td>0.03</td><td>0.03</td><td>0.00</td><td>0.02</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.85</td><td>0.88</td><td>0.91</td><td>0.95</td><td>0.82</td><td>0.87</td><td>0.88</td><td>0.46</td><td>0.45</td><td>0.77</td><td>0.56</td></tr><tr><td>ReFT</td><td>0.94</td><td>0.86</td><td>0.86</td><td>0.89</td><td>0.92</td><td>0.83</td><td>0.86</td><td>0.87</td><td>0.64</td><td>0.60</td><td>0.76</td><td>0.67</td></tr><tr><td>BaFT (Ours)</td><td>0.93</td><td>0.84</td><td>0.95</td><td>0.91</td><td>0.92</td><td>0.83</td><td>0.95</td><td>0.90</td><td>0.59</td><td>0.55</td><td>0.98</td><td>0.71</td></tr><tr><td></td><td colspan="12">LLaMA 3-8b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>MEMIT</td><td>0.91</td><td>0.85</td><td>0.96</td><td>0.91</td><td>0.84</td><td>0.78</td><td>0.87</td><td>0.83</td><td>0.06</td><td>0.06</td><td>0.03</td><td>0.05</td></tr><tr><td>FT-L</td><td>0.33</td><td>0.32</td><td>0.53</td><td>0.39</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.01</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.88</td><td>0.76</td><td>0.88</td><td>0.94</td><td>0.83</td><td>0.75</td><td>0.84</td><td>0.34</td><td>0.34</td><td>0.75</td><td>0.48</td></tr><tr><td>ReFT</td><td>0.92</td><td>0.82</td><td>0.65</td><td>0.80</td><td>0.89</td><td>0.78</td><td>0.64</td><td>0.77</td><td>0.46</td><td>0.43</td><td>0.44</td><td>0.44</td></tr><tr><td>BaFT (Ours)</td><td>0.92</td><td>0.82</td><td>0.83</td><td>0.86</td><td>0.90</td><td>0.78</td><td>0.85</td><td>0.84</td><td>0.43</td><td>0.40</td><td>0.95</td><td>0.59</td></tr><tr><td></td><td colspan="12">Gemma 1.1-7b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>FT-L</td><td>0.04</td><td>0.04</td><td>0.02</td><td>0.03</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.01</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.87</td><td>0.83</td><td>0.90</td><td>0.93</td><td>0.81</td><td>0.82</td><td>0.85</td><td>0.34</td><td>0.34</td><td>0.59</td><td>0.42</td></tr><tr><td>ReFT</td><td>0.90</td><td>0.75</td><td>0.86</td><td>0.84</td><td>0.88</td><td>0.72</td><td>0.84</td><td>0.81</td><td>0.48</td><td>0.44</td><td>0.69</td><td>0.54</td></tr><tr><td>BaFT (Ours)</td><td>0.90</td><td>0.74</td><td>0.91</td><td>0.85</td><td>0.89</td><td>0.73</td><td>0.90</td><td>0.84</td><td>0.45</td><td>0.41</td><td>0.87</td><td>0.58</td></tr></table>
539
+
540
+ Table 7: Batched Editing performance on ZsRE dataset, evaluated after conducting $T$ times of editing with batch size 50 sequentially. Best Avg. results are in bold.
541
+
542
+ <table><tr><td></td><td colspan="4">T=1</td><td colspan="4">T=10</td><td colspan="4">T=20</td></tr><tr><td></td><td colspan="12">LLaMA 2-7b</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>MEMIT</td><td>0.87</td><td>0.82</td><td>0.91</td><td>0.87</td><td>0.45</td><td>0.43</td><td>0.46</td><td>0.45</td><td>0.03</td><td>0.03</td><td>0.02</td><td>0.03</td></tr><tr><td>FT-L</td><td>0.39</td><td>0.39</td><td>0.63</td><td>0.47</td><td>0.13</td><td>0.10</td><td>0.07</td><td>0.10</td><td>0.07</td><td>0.05</td><td>0.02</td><td>0.05</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.86</td><td>0.76</td><td>0.87</td><td>0.78</td><td>0.69</td><td>0.79</td><td>0.75</td><td>0.51</td><td>0.51</td><td>0.76</td><td>0.59</td></tr><tr><td>ReFT</td><td>0.90</td><td>0.77</td><td>0.85</td><td>0.84</td><td>0.80</td><td>0.69</td><td>0.82</td><td>0.77</td><td>0.60</td><td>0.56</td><td>0.74</td><td>0.63</td></tr><tr><td>BaFT (Ours)</td><td>0.92</td><td>0.78</td><td>0.89</td><td>0.86</td><td>0.80</td><td>0.69</td><td>0.90</td><td>0.80</td><td>0.62</td><td>0.57</td><td>0.92</td><td>0.70</td></tr><tr><td></td><td colspan="12">LLaMA 3-8b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>MEMIT</td><td>0.88</td><td>0.83</td><td>0.89</td><td>0.87</td><td>0.45</td><td>0.42</td><td>0.46</td><td>0.44</td><td>0.00</td><td>0.00</td><td>0.04</td><td>0.01</td></tr><tr><td>FT-L</td><td>0.32</td><td>0.29</td><td>0.42</td><td>0.34</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.01</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>AdaLoRA</td><td>1.00</td><td>0.83</td><td>0.67</td><td>0.83</td><td>0.74</td><td>0.63</td><td>0.63</td><td>0.67</td><td>0.42</td><td>0.40</td><td>0.69</td><td>0.50</td></tr><tr><td>ReFT</td><td>0.92</td><td>0.74</td><td>0.52</td><td>0.73</td><td>0.75</td><td>0.61</td><td>0.49</td><td>0.62</td><td>0.48</td><td>0.42</td><td>0.43</td><td>0.44</td></tr><tr><td>BaFT (Ours)</td><td>0.92</td><td>0.75</td><td>0.62</td><td>0.76</td><td>0.75</td><td>0.61</td><td>0.72</td><td>0.69</td><td>0.49</td><td>0.43</td><td>0.78</td><td>0.57</td></tr><tr><td></td><td colspan="12">Gemma 1.1-7b-Instruct</td></tr><tr><td></td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td><td>Rel.</td><td>Gen.</td><td>Loc.</td><td>Avg.</td></tr><tr><td>FT-L</td><td>0.02</td><td>0.02</td><td>0.01</td><td>0.02</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.01</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>AdaLoRA</td><td>0.13</td><td>0.08</td><td>0.02</td><td>0.08</td><td>0.08</td><td>0.06</td><td>0.02</td><td>0.05</td><td>0.03</td><td>0.03</td><td>0.00</td><td>0.02</td></tr><tr><td>ReFT</td><td>0.88</td><td>0.68</td><td>0.79</td><td>0.78</td><td>0.71</td><td>0.57</td><td>0.72</td><td>0.67</td><td>0.48</td><td>0.42</td><td>0.61</td><td>0.50</td></tr><tr><td>BaFT (Ours)</td><td>0.87</td><td>0.68</td><td>0.87</td><td>0.81</td><td>0.72</td><td>0.57</td><td>0.83</td><td>0.71</td><td>0.50</td><td>0.45</td><td>0.81</td><td>0.59</td></tr></table>
543
+
544
+ # D.2 DOWNSTREAM LOCALITY PERFORMANCE
545
+
546
+ In this section, we study how different editing methods affect the LLM's performance on unrelated downstream tasks, as an additional measure of locality. To this end, we follow Yao et al. (2023) and evaluate the LLM's ability to answer PIQA questions from Bisk et al. (2020) that are unrelated to the edits. Correctness is measured by whether the LLM chooses the correct answer according to its perplexity. For more details, we refer the readers to Yao et al. (2023).
547
+
548
+ Table 8: Downstream task (PIQA) performance after editing with 100 ZsRE knowledge items. The edited LLM is LLaMA-2.
549
+
550
+ <table><tr><td rowspan="2">PIQA Accu.</td><td>Base</td><td>AdaLoRA</td><td>FT-L</td><td>ROME</td><td>MEMIT</td><td>MELO</td><td>WISElight</td><td>ReFT</td><td>BaFT</td></tr><tr><td>0.77</td><td>0.48</td><td>0.75</td><td>0.50</td><td>0.52</td><td>0.77</td><td>0.77</td><td>0.77</td><td>0.77</td></tr></table>
551
+
552
+ # D.3 MORE DISCUSSION ON BAFT VS WISE
553
+
554
+ In our experiments, we note that BaFT achieves better parameter efficiency and speed at the cost of slightly lower performance, resulting in an efficiency-effectiveness trade-off. Notably, this efficiency of BaFT can be valuable in applications that require frequent knowledge updates.
555
+
556
+ One possible way to improve the effectiveness of BaFT is to use more parameters, given that BaFT's parameter efficiency is already much higher than that of the state-of-the-art baseline WISE. As discussed in Sec 3.3, when WISE's parameter count is reduced from $\mathrm{WISE_{full}}$ to $\mathrm{WISE_{light}}$ , its performance degrades drastically. In comparison, BaFT uses far fewer parameters while maintaining highly comparable performance. Given this, we expect that better training hyper-parameters such as the learning rate would yield only mild performance improvements, and that more parameters are needed.
557
+
558
+ To validate this, we added interventions to all layers (a common practice in ReFT (Wu et al., 2024)) and increased the subspace rank to 16. This improved BaFT's performance on editing LLaMA-2 with 100 ZsRE knowledge items from 0.80 (Rel: 0.73, Gen: 0.68, Loc: 0.98) to 0.82 (Rel: 0.77, Gen: 0.73, Loc: 0.95). However, we noted that increasing the subspace rank further did not help.
559
+
560
+ Therefore, we conjecture that building a larger BaFT (and ReFT) requires incorporating sparsity in the basis activations as well. This can help alleviate unintentional parameter updates, as in GRACE (Hartvigsen et al., 2024) and WISE (Wang et al., 2024d). In addition, such sparsity opens the door to automating position selection: when all bases are inactive, BaFT makes no update to the representation, which is equivalent to dropping that position from the fine-tuning process. We plan to explore this direction in our future work.
2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2d0d131f3f18bb77e401d05b7a240651ab5c3c395e2e04f3478adce711f4d33c
3
+ size 1222295
2025/Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/4be298ba-d505-43be-9230-ca3e8692d35e_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/4be298ba-d505-43be-9230-ca3e8692d35e_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/4be298ba-d505-43be-9230-ca3e8692d35e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0a0477262c96d9ca5a7062451a9e651edf9020ff40348937749d0fa29794ebf
3
+ size 2359781
2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20895d9a27588652872a0207d31d914301ba3a2e2385b54c9456ba5dcbec8915
3
+ size 4324074
2025/Unlocking Global Optimality in Bilevel Optimization_ A Pilot Study/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/1210ff35-9465-44cf-b6f5-91533149c936_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/1210ff35-9465-44cf-b6f5-91533149c936_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/1210ff35-9465-44cf-b6f5-91533149c936_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:00631846be3f9e9bf06627caa278700955d5d1daef668506f7bd90a450982907
3
+ size 8469948
2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c2cf46a90bd1a67da7add95cb4bcb98b56b2dea642679fc73660e2c826d43ae4
3
+ size 2540843
2025/Unlocking Guidance for Discrete State-Space Diffusion and Flow Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Point Processes through Point Set Diffusion/e3770118-f325-4609-a8ac-be9346630a7e_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Point Processes through Point Set Diffusion/e3770118-f325-4609-a8ac-be9346630a7e_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking Point Processes through Point Set Diffusion/e3770118-f325-4609-a8ac-be9346630a7e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3405ca4d954e596185a3ec064e6e29e1e34fd1d6084d8a48c1d5c22d8e10d4e5
3
+ size 3822910
2025/Unlocking Point Processes through Point Set Diffusion/full.md ADDED
@@ -0,0 +1,595 @@
1
+ # UNLOCKING POINT PROCESSES THROUGH POINT SET DIFFUSION
2
+
3
+ David Lüdke*, Enric Rabasseda Raventós*, Marcel Kollovieh, Stephan Günnemann
4
+
5
+ Department of Informatics & Munich Data Science Institute
6
+
7
+ Technical University of Munich, Germany
8
+
9
+ {d.luedke,e.rabasseda,m.kollovieh,s.guennemann}@tum.de
10
+
11
+ # ABSTRACT
12
+
13
+ Point processes model the distribution of random point sets in mathematical spaces, such as spatial and temporal domains, with applications in fields like seismology, neuroscience, and economics. Existing statistical and machine learning models for point processes are predominantly constrained by their reliance on the characteristic intensity function, introducing an inherent trade-off between efficiency and flexibility. In this paper, we introduce POINT SET DIFFUSION, a diffusion-based latent variable model that can represent arbitrary point processes on general metric spaces without relying on the intensity function. By directly learning to stochastically interpolate between noise and data point sets, our approach effectively captures the distribution of point processes and enables efficient, parallel sampling and flexible generation for complex conditional tasks. Experiments on synthetic and real-world datasets demonstrate that POINT SET DIFFUSION achieves state-of-the-art performance in unconditional and conditional generation of spatial and spatiotemporal point processes while providing up to orders of magnitude faster sampling.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Point processes describe the distribution of point sets in a mathematical space where the location and number of points are random. On Euclidean spaces, point processes (e.g., spatial and/or temporal; SPP, STPP, TPP) have been widely used to model events and entities in space and time, such as earthquakes, neural activity, transactions, and social media posts.
18
+
19
+ Point processes can exhibit complex interactions between points, leading to correlations that are hard to capture effectively (Daley & Vere-Jones, 2007). The distribution of points is typically characterized by a non-negative intensity function, representing the expected number of events in a bounded region of space (Daley et al., 2003). A common approach to modeling point processes on general metric spaces is to parameterize an inhomogeneous intensity as a function of space.
20
+
21
+ However, this approach assumes independence between points, which restricts its ability to model complex interactions and hinders generalization across different point sets (Daley et al., 2003; Daley & Vere-Jones, 2007).
22
+
23
+ ![](images/dcc8dba70265023d12919394878a8080dd32c51c96c172fceab3416ca34bf0b1.jpg)
24
+ Figure 1: Illustration of POINT SET DIFFUSION for earthquakes in Japan. The forward process stochastically interpolates between the original data point set $X_0$ and a noise point set $X_T$ , progressively removing data points and adding noise points. To generate new samples from the data distribution, we approximate the reverse posterior $q(X_t | X_0, X_{t+1})$ and add approximate data points and remove noise points.
25
+
26
+ For ordered spaces like time (STPP, TPP), the predominant approach is to model the conditional intensity autoregressively, conditioning each point on its past, allowing temporal causal dependencies, which can be conveniently captured by state-of-the-art machine learning models (Shchur et al., 2021). While this enables point interactions, these models rely on likelihood-based training and sequential sampling, which require integrating the intensity function over the entire space. Ultimately, this constrains models, as it either necessitates oversimplified parameterizations that restrict point dependencies and introduce smoothness (Ozaki, 1979; Ogata, 1998; Zhou & Yu, 2023), or approximations using amortized inference (Zhou et al., 2022), numerical (Chen et al., 2020), or Monte Carlo methods (Hong & Shelton, 2022). Thus, capturing complex point dependencies and sampling from point processes, particularly on general metric spaces, remains an open research challenge.
27
+
28
+ Lüdke et al. (2023) overcame the limitations of the conditional intensity function for temporal point processes by proposing ADD-THIN, a diffusion model for TPPs that builds on the thinning and superposition properties of TPPs and directly models entire event sequences. In this paper, we generalize this idea to point processes on general metric spaces and derive a diffusion-based latent variable model, POINT SET DIFFUSION, that directly learns to model the stochastic interpolation between a data point set and samples from any noise point process (see Figure 1). Furthermore, we show how to generate conditional samples with our unconditional POINT SET DIFFUSION model to solve arbitrary conditioning tasks on general metric spaces. Our experiments demonstrate that POINT SET DIFFUSION achieves state-of-the-art results on conditional and unconditional tasks for SPPs, TPPs and STPPs. Our contributions can be summarized as follows:
29
+
30
+ - We derive a diffusion-based latent variable model that captures the complex distribution of point processes on general metric spaces by learning stochastic interpolations between data and noise point sets.
31
+ - Our model enables efficient and parallel sampling of point sets while supporting flexible conditioning through binary functions on the metric space.
32
+ - We propose a model-agnostic evaluation framework for assessing generative point process models on Euclidean spaces.
33
+ - Our method achieves state-of-the-art results for conditional and unconditional generation of SPPs, TPPs, and STPPs while offering orders of magnitude faster sampling.
34
+
35
+ # 2 BACKGROUND
36
+
37
+ # 2.1 POINT PROCESSES
38
+
39
+ A point process (Daley et al., 2003) is a stochastic process where realizations consist of finite sets of points randomly located in a mathematical space. More formally, let $(D,d)$ be a complete, separable metric space equipped with its Borel $\sigma$ -algebra $\mathcal{B}$ . A point process on $D$ is a mapping $X$ from a probability space $(\Omega ,\mathcal{A},\mathcal{P})$ into $N^{lf}$ , the set of all possible point configurations, such that for any bounded Borel set $A\subseteq D$ , the number of points in $A$ , denoted by $N(A)$ , is a finite random variable.
40
+
41
+ Given a realization of the point process $X = \{\pmb{x}_i \in D\}_{1 \leq i \leq n}$ , where $n$ is the number of points, the number of points in a region is expressed as the counting measure $N(A) = \sum_{i=1}^{n} \mathbf{1}\{\pmb{x}_i \in A\}$ . Here, we assume the point process is simple, i.e., almost surely $N(\{\pmb{x}_i\}) \leq 1$ for all $\pmb{x}_i \in D$ , meaning no two points coincide. Point processes are commonly characterized by their intensity function, which is defined through the following random measure:
42
+
43
+ $$
44
+ A \mapsto \mu (A) := \mathbb {E} [ N (A) ] = \int_ {A} \lambda (\boldsymbol {x}) \mathrm {d} \boldsymbol {x}, \tag {1}
45
+ $$
46
+
47
+ where $\mu(A)$ represents the expected number of points in a region $A$ . Then, a point process is said to have intensity $\lambda$ if the measure $\mu$ above has a density $\lambda$ with respect to the Lebesgue measure. Thus, the intensity function $\lambda(\boldsymbol{x})$ gives the expected number of points per unit volume in a small region of the Borel set $A \subseteq D$ .
48
+
49
+ The points in a realization $X$ can exhibit complex correlations, so the intensity function is nontrivial to parameterize. On a Euclidean space $\mathbb{R}$ we can specify the Papangelou intensity (Daley et al., 2003):
50
+
51
+ $$
52
+ \lambda (\boldsymbol {x}) = \lim _ {\delta \rightarrow 0} \frac {P \left\{N \left(B _ {\delta} (\boldsymbol {x})\right) = 1 \mid C \left(N \left(\mathbb {R} \setminus B _ {\delta} (\boldsymbol {x})\right)\right)\right\}}{\left| B _ {\delta} (\boldsymbol {x}) \right|}, \tag {2}
53
+ $$
54
+
55
+ where $B_{\delta}(\pmb{x})$ is the ball centered at $x$ with a radius of $\delta$ , and $C(N(\mathbb{R} \setminus B_{\delta}(\pmb{x})))$ represents the information about the point process outside the ball. If the Euclidean space is ordered, for instance, representing time, the conditioning term would represent the history of all points prior to $x$ .
56
+
57
+ In general, effectively modeling and sampling from the conditional intensity (or related measures, e.g., hazard function or conditional density) for arbitrary metric spaces is a fundamental problem (Daley et al., 2003; Daley & Vere-Jones, 2007). This difficulty has led to a variety of simplified parametrizations that restrict the captured point interactions (Ozaki, 1979; Zhou & Yu, 2023; Daley et al., 2003; Daley & Vere-Jones, 2007); discretizations of the space (Ogata, 1998; Osama et al., 2019); and numerical or Monte Carlo approximations (Chen et al., 2020; Hong & Shelton, 2022).
58
+
59
+ In contrast, we propose a method that bypasses the abstract concept of a (conditional) intensity function by directly manipulating point sets through a latent variable model. Our approach leverages the following point process properties:
60
+
61
+ Superposition: Given two point processes $N_{1}$ and $N_{2}$ with intensities $\lambda_{1}$ and $\lambda_{2}$ respectively, we define the superposition of the point processes as $N = N_{1} + N_{2}$ or equivalently $X_{1} \bigcup X_{2}$ . Then, the resulting point process $N$ has intensity $\lambda = \lambda_{1} + \lambda_{2}$ .
+
+ Independent thinning: Given a point process $N$ with intensity $\lambda$ , randomly removing each point with probability $p$ is equivalent to sampling points from a point process with intensity $(1 - p)\lambda$ .
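+
+ As a minimal illustration of these two properties, the sketch below (NumPy, with toy homogeneous Poisson samples on $[0,1]^2$ ; all names are illustrative) thins a point set by removing each point independently with probability $p$ and superposes two point sets by taking their union.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ def thin(points, p, rng):
+     """Independent thinning: remove each point with probability p."""
+     keep = rng.random(len(points)) >= p
+     return points[keep]
+
+ def superpose(points_a, points_b):
+     """Superposition: the union of two point sets (intensities add)."""
+     return np.concatenate([points_a, points_b], axis=0)
+
+ # Toy example: two homogeneous Poisson samples on the unit square.
+ X1 = rng.random((rng.poisson(50), 2))
+ X2 = rng.random((rng.poisson(30), 2))
+ X = superpose(thin(X1, p=0.3, rng=rng), X2)   # resulting intensity (1 - 0.3) * 50 + 30
+ ```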
62
+
63
+ # 2.2 DIFFUSION MODELS
64
+
65
+ Ho et al. (2020) and Sohl-Dickstein et al. (2015) introduced a new class of generative latent variable models - probabilistic denoising diffusion models. Conceptually, these models learn to reverse a probabilistic noising process to generate new data and consist of three main components: a noising process, a denoising process, and a sampling procedure. The noising process is defined as a forward Markov chain $q(X_{t + 1}|X_t)$ , which progressively noises a data sample $X_0 \sim p_{\mathrm{data}}(X)$ over $T$ steps, eventually transforming it into a sample from a stationary noise distribution $X_T \sim p_{\mathrm{noise}}(X)$ . Then, the denoising process is learned to reverse the noising process by approximating the posterior $q(X_t|X_0,X_{t + 1})$ with a model $p_{\theta}(X_t|X_{t + 1})$ . Finally, the sampling procedure shows how to generate samples from the learned data distribution $p_{\theta}(X) = \int p_{\mathrm{noise}}(X_T)\prod_{t = 0}^{T - 1}p_{\theta}(X_t|X_{t + 1})\mathrm{d}X_1\dots \mathrm{d}X_T$ .
66
+
67
+ # 3 POINT SET DIFFUSION
68
+
69
+ In this section, we derive a diffusion-based latent variable model for point sets on general metric spaces by systematically applying the thinning and superposition properties of random sets. This approach allows direct manipulation of random point sets, avoiding the need for the abstract concept of an intensity function. We begin by outlining the forward noising process in Section 3.1, which stochastically interpolates between point sets from the generating process and those from a noise distribution. Subsequently, we demonstrate how to learn to reverse this noising process to generate new random point sets in Section 3.2. Finally, in Section 3.3, we show how to sample from our unconditional model and generate conditional samples for general conditioning tasks on the metric space.
70
+
71
+ # 3.1 FORWARD PROCESS
72
+
73
+ Let $X_0 \sim p_{\mathrm{data}}(X)$ be an i.i.d. sample from the generating point process, and let $X_T \sim p_{\mathrm{noise}}(X)$ represent a sample from a noise point process. We define the forward process as a stochastic interpolation between the point sets $X_0$ and $X_T$ over $T$ steps. This process is modeled as a Markov chain $q(X_{t+1} | X_t)$ , where $X_t$ is the superposition of two random subsets: $X_t^{\mathrm{thin}} \subseteq X_0$ and $X_t^\epsilon \subseteq X_T$ . Specifically, $\forall t : X_t = X_t^{\mathrm{thin}} \cup X_t^\epsilon$ , where $X_t^{\mathrm{thin}}$ and $X_t^\epsilon$ are independent samples from a thinning and a noise process, respectively. We define the thinning and noise processes given two noise schedules $\{\alpha_t \in (0, 1)\}_{t=1}^T$ and $\{\beta_t \in (0, 1)\}_{t=1}^T$ as follows:
74
+
75
+ Thinning Process: This process progressively thins points in $X_0^{\mathrm{thin}} = X_0$ , removing signal over time. At every step $t + 1$ , each point $\pmb{x} \in X_t^{\mathrm{thin}}$ is independently thinned with probability $1 - \alpha_{t + 1}$ :
76
+
77
+ $$
78
+ q \left(\boldsymbol {x} \in X _ {t + 1} ^ {\text{thin}} \mid \boldsymbol {x} \in X _ {t} ^ {\text{thin}}\right) = \alpha_ {t + 1}. \tag {3}
79
+ $$
80
+
81
+ ![](images/51b30b3310cda92af2bd6be218eb8b2d0a0505bfccca5b0c6e75f2135d310d75.jpg)
82
+ Figure 2: The forward process is a Markov Chain $q(X_{t + 1}|X_t)$ , that stochastically interpolates a data sample $X_0$ with a noise point set $X_T$ over $T$ steps by applying a thinning and a noise process.
83
+
84
+ Consequently, the thinning defines $n$ independent Bernoulli chains, and the probability of any point $\pmb{x} \in X_0$ remaining in $X_{t}^{\mathrm{thin}}$ is:
85
+
86
+ $$
87
+ q \left(\boldsymbol {x} \in X _ {t} ^ {\text{thin}} \mid \boldsymbol {x} \in X _ {0}\right) = \bar {\alpha} _ {t}, \tag {4}
88
+ $$
89
+
90
+ where $\bar{\alpha}_t = \prod_{i=1}^t \alpha_i$ . Equivalently, the intensity of the thinned points at step $t$ is given by $\lambda_t^{\mathrm{thin}} = \bar{\alpha}_t \lambda_0$ and the number of remaining points follows a Binomial distribution: $Pr[n_t | X_0^{\mathrm{thin}}] = \mathrm{Binomial}(|X_0|, \bar{\alpha}_t)$ , where $n_t = |X_t^{\mathrm{thin}}|$ .
91
+
92
+ Noise Process: This process adds random points $X_T^\epsilon \sim p_{\mathrm{noise}}(X)$ sampled from a noise point process with intensity $\lambda^\epsilon$ . At step $t + 1$ , we express $X_{t + 1}^{\epsilon}|X_{t}^{\epsilon}$ as:
93
+
94
+ $$
95
+ X _ {t + 1} ^ {\epsilon} = X _ {t} ^ {\epsilon} \cup X _ {t + 1} ^ {\Delta \epsilon}, \quad \text{where } X _ {t + 1} ^ {\Delta \epsilon} \sim \beta_ {t + 1} \lambda^ {\epsilon}. \tag {5}
96
+ $$
97
+
98
+ By the superposition property, the intensity of $X_{t}^{\epsilon}$ is $\lambda_t^\epsilon = \bar{\beta}_t\lambda^\epsilon$ , where $\bar{\beta}_{t} = \sum_{i = 1}^{t}\beta_{i}$ and $\bar{\beta}_t\in [0,1]$ . Alternatively, we can view the noise process as a reversed thinning process: we sample $X_{T}^{\epsilon}\sim p_{\mathrm{noise}}(X)$ and thin it by $1 - \bar{\beta}_t$ to obtain $X_{t}^{\epsilon}$ . Given a noise sample $X_{T}^{\epsilon}$ , we then find that:
99
+
100
+ $$
101
+ q \left(\boldsymbol {x} \in X _ {t} ^ {\epsilon} \mid \boldsymbol {x} \in X _ {T} ^ {\epsilon}\right) = \bar {\beta} _ {t}. \tag {6}
102
+ $$
103
+
104
+ Notably, this process is independent of the random point set $X_0$ , i.e., $\forall t: q(X_t^\epsilon | X_0) = q(X_t^\epsilon)$ .
105
+
106
+ We present a visual depiction of the two forward processes in Figure 2. Finally, given that $\forall t:X_{t} = X_{t}^{\mathrm{thin}}\bigcup X_{t}^{\epsilon}$ it follows that for $\lim_{t\to T}\bar{\alpha}_t = 0$ and $\lim_{t\to T}\bar{\beta}_t = 1$ the stationary distribution is $q(X_{T}|X_{0}) = p_{\mathrm{noise}}(X)$ , which can be seen by applying the superposition property and finding the intensity of $X_{t}|X_{0}$ to be $\bar{\alpha}_{t}\lambda_{0} + \bar{\beta}_{t}\lambda^{\epsilon}$ . To summarize, the forward process gradually removes points from the original point set $X_0\sim p_{\mathrm{data}}(X)$ while progressively adding points of a noise point set $X_{T}\sim p_{\mathrm{noise}}(X)$ , stochastically interpolating between data and noise.
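+
+ As a sketch, the forward marginal $q(X_t \mid X_0)$ can be sampled directly by keeping each data point with probability $\bar{\alpha}_t$ (Equation 4) and each noise point with probability $\bar{\beta}_t$ (Equation 6); the NumPy snippet below, with illustrative names and homogeneous noise on the unit square, is only a schematic of this step.
+
+ ```python
+ import numpy as np
+
+ def sample_xt(X0, X_T, alpha_bar_t, beta_bar_t, rng):
+     """Forward marginal q(X_t | X_0): superpose thinned data and thinned noise."""
+     keep_data  = rng.random(len(X0))  < alpha_bar_t   # Eq. (4)
+     keep_noise = rng.random(len(X_T)) < beta_bar_t    # Eq. (6)
+     return np.concatenate([X0[keep_data], X_T[keep_noise]], axis=0)
+
+ rng = np.random.default_rng(0)
+ X0  = rng.random((40, 2))                      # data point set on [0, 1]^2
+ X_T = rng.random((rng.poisson(40), 2))         # noise point set with constant intensity
+ X_mid = sample_xt(X0, X_T, alpha_bar_t=0.5, beta_bar_t=0.5, rng=rng)
+ ```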
107
+
108
+ # 3.2 REVERSE PROCESS
109
+
110
+ To generate samples from our diffusion model, i.e., $X_{T}\rightarrow \dots \rightarrow X_{0}$ , we need to learn how to reverse the forward process by approximating the posterior $q(X_{t}|X_{0},X_{t + 1})$ with a model $p_{\theta}(X_t|X_{t + 1})$ . We will start by deriving the posterior $q(X_{t}|X_{0},X_{t + 1})$ from the forward process $q(X_{t + 1}|X_t)$ and then show how to parameterize and train $p_{\theta}(X_t|X_{t + 1})$ to approximate the posterior.
111
+
112
+ Since the forward process consists of two independent processes (thinning and noise) and noticing that $X_{t + 1}^{\mathrm{thin}} = X_0 \bigcap X_{t + 1}$ and $X_{t + 1}^{\epsilon} = X_{t + 1} \setminus X_0$ , the posterior can be derived in two parts:
113
+
114
+ Thinning posterior: Since all points in $X_{t+1}^{\mathrm{thin}}$ have been retained from $t = 0$ , it follows that $X_{t+1}^{\mathrm{thin}} \subseteq X_t^{\mathrm{thin}}$ . Then, for each point $\boldsymbol{x} \in X_0 \setminus X_{t+1}^{\mathrm{thin}}$ , we derive the posterior using Bayes' theorem, applying Equation 3, Equation 4 and the Markov property:
115
+
116
+ $$
117
+ \begin{array}{rl} q (\boldsymbol {x} \in X _ {t} ^ {\text{thin}} \mid \boldsymbol {x} \notin X _ {t + 1} ^ {\text{thin}}, \boldsymbol {x} \in X _ {0}) & = \frac {q (\boldsymbol {x} \notin X _ {t + 1} ^ {\text{thin}} \mid \boldsymbol {x} \in X _ {t} ^ {\text{thin}}) \, q (\boldsymbol {x} \in X _ {t} ^ {\text{thin}} \mid \boldsymbol {x} \in X _ {0})}{q (\boldsymbol {x} \notin X _ {t + 1} ^ {\text{thin}} \mid \boldsymbol {x} \in X _ {0})} \qquad (7) \\ & = \frac {\left(1 - \alpha_ {t + 1}\right) \bar {\alpha} _ {t}}{1 - \bar {\alpha} _ {t + 1}} = \frac {\bar {\alpha} _ {t} - \bar {\alpha} _ {t + 1}}{1 - \bar {\alpha} _ {t + 1}}. \qquad (8) \end{array}
118
+ $$
119
+
120
+ Thus, we can sample $X_{t}^{\mathrm{thin}}$ by taking the superposition of $X_{t + 1}^{\mathrm{thin}}$ and a thinning of $X_0\setminus X_{t + 1}^{\mathrm{thin}}$ according to Equation 7.
121
+
122
+ ![](images/a864d9aa0d44784f14d8457c41401d4dfd32382414b5184aab3717e726283d67.jpg)
123
+ Figure 3: The posterior reverses the stochastic interpolation $X_0 \rightarrow X_T$ of the forward process by adding back thinned points from the thinning process and thinning points added in the noise process.
124
+
125
+ Noise posterior: Following the reverse thinning interpretation of the noise process, each point in $X_{t}^{\epsilon}$ must have been in both $X_{t+1}^{\epsilon}$ and $X_{T}^{\epsilon}$ . Hence, we derive the posterior for each point in $X_{t+1}^{\epsilon}$ to still be in $X_{t}^{\epsilon}$ by following Equation 6, along with the fact that $X_{t}^{\epsilon}$ is independent from $X_{0}$ :
126
+
127
+ $$
128
+ \begin{array}{rl} q \left(\boldsymbol {x} \in X _ {t} ^ {\epsilon} \mid \boldsymbol {x} \in X _ {t + 1} ^ {\epsilon}, \boldsymbol {x} \notin X _ {0}\right) & = \frac {q \left(\boldsymbol {x} \in X _ {t + 1} ^ {\epsilon} \mid \boldsymbol {x} \in X _ {t} ^ {\epsilon}\right) q \left(\boldsymbol {x} \in X _ {t} ^ {\epsilon}\right)}{q \left(\boldsymbol {x} \in X _ {t + 1} ^ {\epsilon}\right)} \qquad (9) \\ & = \frac {1 \cdot \bar {\beta} _ {t}}{\bar {\beta} _ {t + 1}} = \frac {\bar {\beta} _ {t}}{\bar {\beta} _ {t + 1}}. \qquad (10) \end{array}
129
+ $$
130
+
131
+ Thus, we can sample $X_{t}^{\epsilon}$ by thinning $X_{t + 1}^{\epsilon}$ with probability $(1 - \frac{\bar{\beta}_t}{\bar{\beta}_{t + 1}})$ .
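+
+ Putting the two posteriors together, a single reverse step $q(X_t \mid X_0, X_{t+1})$ re-adds each thinned data point with probability $(\bar{\alpha}_t - \bar{\alpha}_{t+1}) / (1 - \bar{\alpha}_{t+1})$ and keeps each noise point with probability $\bar{\beta}_t / \bar{\beta}_{t+1}$ . The NumPy sketch below tracks which points of $X_0$ are currently retained with a boolean mask; this representation and all names are implementation conveniences assumed for illustration.
+
+ ```python
+ import numpy as np
+
+ def posterior_step(X0, retained, X_eps_next, a_bar_t, a_bar_next, b_bar_t, b_bar_next, rng):
+     """One reverse step q(X_t | X_0, X_{t+1}).
+
+     X0         : (n, d) original data point set
+     retained   : (n,)   boolean mask of X0 points still present at step t+1
+     X_eps_next : (m, d) noise points present at step t+1
+     """
+     # Thinning posterior (Eqs. 7-8): re-add each thinned data point with
+     # probability (a_bar_t - a_bar_{t+1}) / (1 - a_bar_{t+1}).
+     p_readd = (a_bar_t - a_bar_next) / (1.0 - a_bar_next)
+     readd = (~retained) & (rng.random(len(X0)) < p_readd)
+     retained_t = retained | readd
+
+     # Noise posterior (Eqs. 9-10): keep each noise point with probability b_bar_t / b_bar_{t+1}.
+     keep_noise = rng.random(len(X_eps_next)) < (b_bar_t / b_bar_next)
+
+     X_t = np.concatenate([X0[retained_t], X_eps_next[keep_noise]], axis=0)
+     return X_t, retained_t, X_eps_next[keep_noise]
+ ```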
132
+
133
+ Parametrization: Given $X_0$ and $X_T$ , the derived posterior can reverse the noising process to generate $X_0$ . However, to generate a new approximate sample $X_0 \sim p_{\mathrm{data}}(X)$ , we need to be able to sample from the posterior $q(X_t | X_0, X_{t+1})$ without knowing $X_0$ . For this reason, we approximate the posterior with a model $p_\theta(X_t | X_{t+1})$ , where we choose $p_\theta(X_t | X_{t+1}) = \int q(X_t | \widetilde{X}_0, X_{t+1}) p_\theta(\widetilde{X}_0 | X_{t+1}) \, \mathrm{d}\widetilde{X}_0$ and train a neural network $p_\theta(\widetilde{X}_0 | X_{t+1})$ to approximate $X_0 | X_{t+1}$ for each $t + 1$ .
134
+
135
+ To effectively train this model, we have to condition our model $p_{\theta}(\widetilde{X}_0|X_t)$ on $X_{t}$ . We propose to embed the points $\pmb{x} \in X_{t}$ in a permutation-invariant manner with a Transformer encoder with full attention, and we apply a sinusoidal embedding to embed $n = |X_{t}|$ and $t$ . Then, to probabilistically predict $X_0|X_t$ , we make use of the following case distinction for $X_{t}^{\mathrm{thin}} = X_{0} \cap X_{t}$ and $X_0 \setminus X_t$ :
136
+
137
+ First, predicting the retained points in $X_{t}$ , i.e., the intersection of $X_{0}$ and $X_{t}$ , is a binary classification task for which we train a multi-layer perceptron (MLP) $g_{\theta}(\boldsymbol{x} \in X_{t}^{\mathrm{thin}} | X_{t}, t)$ with the binary cross-entropy loss $\mathcal{L}_{\mathrm{BCE}}$ . Second, the thinned points in $X_{0}$ , i.e., $X_{0} \setminus X_{t}$ , form a point set $N$ , which can be represented by its counting measure as a mixture of $n$ Dirac measures:
138
+
139
+ $$
140
+ N = \sum_ {i = 1} ^ {n} \delta_ {X _ {i}}. \tag {11}
141
+ $$
142
+
143
+ In A.2, we prove that any finite mixture of Dirac deltas, such as $N$ , can be approximated by an $L^2$ function in $L^2(D, \mu)$ for any metric space $D$ . In Euclidean spaces, we approximate the Dirac measure with a mixture of multivariate Gaussian distributions with diagonal covariance matrices. Note that the multivariate Gaussian density function is a standard approximation of the Dirac delta function and, as the determinant of a diagonal covariance matrix $\pmb{\Sigma} \coloneqq \sigma \pmb{I}$ approaches zero, the Gaussian increasingly resembles the Dirac delta (See Equation A.2). We parameterize the number of points to sample $n_\theta$ and the components of the mixture — weights $\pmb{w}_\theta$ , mean $\pmb{\mu}_\theta$ and diagonal covariance matrix $\pmb{\Sigma}_\theta$ — with an MLP $f_\theta$ and train it with the negative log likelihood $\mathcal{L}_{\mathrm{NLL}}$ .
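+
+ As a sketch of how the thinned points $X_0 \setminus X_t$ could then be decoded in a Euclidean space, one draws the predicted number of points, picks mixture components by their weights, and samples from the corresponding diagonal Gaussians; the network outputs passed in below stand in for $f_\theta$ 's predictions and are assumptions for illustration.
+
+ ```python
+ import numpy as np
+
+ def sample_thinned_points(n, weights, means, sigmas, rng):
+     """Draw n points from a mixture of diagonal Gaussians (approximate Dirac mixture).
+
+     weights : (m,)   mixture weights summing to 1
+     means   : (m, d) component means
+     sigmas  : (m, d) per-dimension standard deviations
+     """
+     comps = rng.choice(len(weights), size=n, p=weights)
+     return means[comps] + sigmas[comps] * rng.standard_normal((n, means.shape[1]))
+ ```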
144
+
145
+ Lastly, to ensure the expected number of points at any time $t$ throughout the diffusion process is constant, we use $\bar{\alpha}_t = 1 - \bar{\beta}_t$ and a noise process with a constant intensity such that $\int_A \lambda^\epsilon = E[N(A)]$ for the bounded Borel set $A$ that represents our domain.
146
+
147
+ # 3.3 SAMPLING PROCEDURE
148
+
149
+ Unconditional sampling: Starting from a sample $X_{T}$ of the noise distribution, we apply our POINT SET DIFFUSION model to sample a new $X_{0}$ over $T$ steps.
150
+
151
+ We start by sampling $X_{T} \sim \lambda^{\epsilon}$ and then, for all $t \in (T, \dots, 1)$ , sample $\widetilde{X}_0 \sim p_\theta(X_0|X_t)$ to subsequently apply the denoising posterior $q(X_{t-1}|\widetilde{X}_0, X_t)$ and attain $X_{t-1}$ . Finally, at step 1 we sample $\widetilde{X}_0 \sim p_\theta(X_0|X_1)$ . We present the extended sampling algorithm in Algorithm 2.
152
+
153
+ Conditional sampling: Let $C: D \to \{0,1\}$ be a conditioning mask on our metric space $D$ , where we define the masking of a subset $X \subseteq D$ as $C(X) := \{x \in X | C(x) = 1\}$ and its complement as $C'(X) := \{x \in X | C(x) = 0\}$ . Then, we can leverage our POINT SET DIFFUSION model to conditionally generate random point sets outside the conditioning mask by applying Algorithm 1:
154
+
155
+ ![](images/719be780ac6f08233118a492aadacb30ff11d7d58f3737e95e3a8ec66fc74c0e.jpg)
156
+
157
+ ![](images/df29baf43c46f37d8e301a894871f7b8b45eaec9841ad6898be0bb1998373b6e.jpg)
158
+ Figure 4: Examples of conditioning masks for $\mathbb{R}_{\geq 0}$ and $\mathbb{R}^2$
159
+
160
+ Algorithm 1 Conditional sampling
161
+ Require: $X_0^c = C(X_0)$
162
+ 1: $X_T \sim \lambda_\epsilon$
163
+ 2: for $t = T, \ldots, 1$ do
164
+ 3: $\widetilde{X}_0 \sim p_\theta(X_0|X_t)$
165
+ 4: $\widetilde{X}_{t-1} \sim q(X_{t-1}|\widetilde{X}_0, X_t)$ (reverse 3.2)
166
+ 5: $X_{t-1}^c \sim q(X_{t-1}^c|X_0^c)$ (forward 3.1)
167
+ 6: $X_{t-1} = C'(\widetilde{X}_{t-1}) \cup C(X_{t-1}^c)$
168
+ 7: end for
169
+ 8: return $C'(X_0)$
170
+
171
+ Thus, following this sampling procedure, we can generate conditional samples for any conditioning mask $C$ . Figure 4 shows some illustrative conditioning masks for bounded sets on $\mathbb{R}_{\geq 0}$ and $\mathbb{R}^2$ , depicting temporal forecasting, history prediction and general imputation tasks.
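+
+ As a small illustration of how the binary mask enters line 6 of Algorithm 1, the sketch below keeps the generated points outside the mask and the (noised) conditioning points inside it; the example mask, which conditions on the first coordinate (e.g., time) being below 0.5, and all names are illustrative.
+
+ ```python
+ import numpy as np
+
+ def apply_mask(points, mask_fn):
+     """C(X): the points of X where the binary mask C evaluates to 1."""
+     if len(points) == 0:
+         return points
+     inside = np.array([bool(mask_fn(x)) for x in points])
+     return points[inside]
+
+ def combine(gen_points, cond_points, mask_fn):
+     """Line 6 of Algorithm 1: keep generated points outside C, conditioning points inside C."""
+     if len(gen_points) == 0:
+         outside_gen = gen_points
+     else:
+         inside = np.array([bool(mask_fn(x)) for x in gen_points])
+         outside_gen = gen_points[~inside]
+     return np.concatenate([outside_gen, apply_mask(cond_points, mask_fn)], axis=0)
+
+ # Example forecasting mask on a time x space domain: condition on the history t < 0.5.
+ mask = lambda x: x[0] < 0.5
+ ```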
172
+
173
+ # 4 RELATED WORK
174
+
175
+ Since large parts of the real world can be effectively captured by Euclidean spaces, point processes have mainly been defined on spatial and temporal dimensions, represented by a Euclidean space. Hence, for this discussion of the related work, we will focus on unordered and ordered point processes on Euclidean spaces, mainly SPPs, TPPs and STPPs. For completeness, we want to mention traditional parametric point processes defined on manifolds, such as determinantal point processes (Berman, 2008; Katori & Shirai, 2022) and cluster point processes (Bogachev & Daletskii, 2013).
176
+
177
+ Unordered Point Processes (SPP): Modeling a permutation-invariant intensity for unordered point sets that captures complex interactions while remaining efficient for sampling is challenging (Daley & Vere-Jones, 2007), seemingly limiting the development of machine-learning-based models for SPPs. Classical models like the Poisson Point Process (Kingman, 1992) use either homogeneous or inhomogeneous intensity functions across space. More flexible models, such as Cox processes (Cox, 1955), and specifically the popular Log-Gaussian Cox Process (Jesper Møller, 1998), extend this by modeling the intensity function through a doubly stochastic process, allowing for flexible spatial inhomogeneity. A recent approach, the Regularized Method by Osama et al. (2019), parameterizes a spatial Poisson process on a hexagonal grid with splines, offering out-of-sample guarantees. However, these methods often rely on spatial discretization and simple parametric forms and some require separate intensity estimates for each point set, limiting their ability to capture the underlying distribution across different samples (Daley & Vere-Jones, 2007; Osama et al., 2019).
178
+
179
+ Ordered point processes (TPP and STPP): The causal ordering of time enables the parametrization of a conditional intensity, which is classically modeled with parametric functions, where the Hawkes process (Hawkes, 1971) is the most widely used model and captures point interaction patterns like self-excitation. Given the sequential nature of ordered point processes, a variety of machine-learning-based approaches for TPPs and STPPs have been proposed (see Shchur et al. (2021) for a review of neural TPPs). Recurrent neural network- (Du et al., 2016; Shchur et al., 2020a) and transformer-based encoders (Zhang et al., 2020a; Zuo et al., 2020; Chen et al., 2020) are leveraged to encode the history, and
180
+
181
+ neurally parameterized Hawkes processes (Zhou & Yu, 2023; Zhang et al., 2020a; Zuo et al., 2020), parametric density functions (Du et al., 2016; Shchur et al., 2020a), mixtures of kernels (Okawa et al., 2019; Soen et al., 2021; Zhang et al., 2020b; Zhou et al., 2022), neural networks (Omi et al., 2019; Zhou & Yu, 2023), Gaussian diffusion (Lin et al., 2022; Yuan et al., 2023) and normalizing flows (Chen et al., 2020; Shchur et al., 2020b) have been proposed to (non-)parametrically decode the conditional density or intensity of the next event.
182
+
183
+ Differences to ADD-THIN (Lüdke et al., 2023): Since our method is closely related to ADD-THIN, we want to highlight their key methodological differences. While ADD-THIN proposed to leverage the thinning and superposition properties to define a diffusion process for TPPs, POINT SET DIFFUSION generalizes this idea to define a diffusion-based latent variable model for point processes on general metric spaces. In doing so, we disentangle the superposition and thinning into two independent processes, allowing for more explicit control, and define the diffusion model, independently of the intensity function, as a stochastic interpolation of point sets. Furthermore, ADD-THIN has to be trained for specific conditioning tasks, while we show how to condition our unconditional POINT SET DIFFUSION model for arbitrary conditioning tasks on the metric space. Lastly, POINT SET DIFFUSION and its parametrization are agnostic to the ordering of points, making it applicable to modeling the general class of point processes on any metric space, including, for example, SPPs.
184
+
185
+ # 5 EXPERIMENTS
186
+
187
+ Although point processes are fundamentally generative models, the standard evaluation method relies on reporting the negative log-likelihood (NLL) on a hold-out test set, effectively reducing the evaluation to single-event predictions for STPPs and TPPs. However, this approach presents two key issues. First, computing the NLL depends on the specific implementation and parameterization of the (conditional) intensity function and is intractable for many models, necessitating approximations using Monte Carlo methods, numerical integration, or the evidence lower bound (ELBO). This complicates fair comparisons between models. Second, evaluating the likelihood conditioned on the ground-truth history does not necessarily reflect how well a model captures the data distribution or its ability to perform on complex conditional generation tasks (Shchur et al., 2021). To overcome these limitations, we evaluate the generative capabilities of our proposed POINT SET DIFFUSION model by benchmarking it on a range of unconditional and conditional generation tasks for SPP and STPP. Further, we compare our model's performance with the state-of-the-art TPP model ADD-THIN in A.6. Details of our model's training and the hyperparameters are in A.4, while all baselines are trained to reproduce their reported NLL using their proposed hyperparameters and code.
188
+
189
+ # 5.1 DATA
190
+
191
+ We follow Chen et al. (2021) and evaluate our model on four benchmark datasets with their proposed pre-processing and splits: three real-world datasets — Japan Earthquakes (U.S. Geological Survey, 2024), New Jersey COVID-19 Cases (The New York Times, 2024), and Citibike Pickups (Citi Bike, 2024)—and one synthetic dataset, Pinwheel, based on a multivariate Hawkes process (Soni, 2019).
192
+
193
+ # 5.2 METRICS
194
+
195
+ To evaluate both unconditional and conditional tasks, we compute distances between point process distributions and individual point sets, assuming the space is normed, and all points are bounded, i.e., $\forall i, \boldsymbol{x}_i \in [-1, 1]^d$ . We use the following metrics in our evaluation:
196
+
197
+ Sequence Length (SL): To compare the length distribution of point sets, we report the Wasserstein distance between the two categorical distributions. For conditional tasks, we compare the length of the generated point set to the ground truth by reporting the Mean Absolute Error (MAE).
198
+
199
+ Counting Distance (CD): Xiao et al. (2017) introduced a Wasserstein distance for ordered TPPs based on Birkhoff's theorem. We generalize this counting distance to higher-dimensional ordered Euclidean spaces (e.g., STPPs) using the $L_{1}$ distance:
200
+
201
+ $$
202
+ \mathrm{CD} (X, Y) = \frac {1}{d} \sum_ {i = 1} ^ {k} \left\| \boldsymbol {x} _ {i} - \boldsymbol {y} _ {i} \right\| _ {1} + \sum_ {j = k + 1} ^ {l} \left\| U - \boldsymbol {y} _ {j} \right\| _ {1}, \tag {12}
203
+ $$
204
+
205
+ Table 1: Density estimation results on the hold-out test set for SPPs, averaged over three random seeds (bold best and underline second best).
206
+
207
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td></tr><tr><td>LOG-GAUSSIAN COX</td><td>0.047</td><td>0.214</td><td>0.209</td><td>0.340</td><td>0.104</td><td>0.336</td><td>0.017</td><td>0.285</td></tr><tr><td>REGULARIZED METHOD</td><td>2.361</td><td>0.391</td><td>0.255</td><td>0.411</td><td>0.097</td><td>0.342</td><td>0.039</td><td>0.411</td></tr><tr><td>POINT SET DIFFUSION</td><td>0.038</td><td>0.173</td><td>0.199</td><td>0.268</td><td>0.056</td><td>0.092</td><td>0.017</td><td>0.099</td></tr></table>
208
+
209
+ Table 2: Conditional generation results on the hold-out test set for SPP, averaged over three random seeds (bold best).
210
+
211
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>MAE(↓)</td><td>WD(↓)</td><td>MAE(↓)</td><td>WD(↓)</td><td>MAE(↓)</td><td>WD(↓)</td><td>MAE(↓)</td><td>WD(↓)</td></tr><tr><td>REGULARIZED METHOD</td><td>30.419</td><td>0.162</td><td>16.075</td><td>0.148</td><td>7.740</td><td>0.115</td><td>3.547</td><td>0.150</td></tr><tr><td>POINT SET DIFFUSION</td><td>4.651</td><td>0.106</td><td>5.056</td><td>0.119</td><td>3.498</td><td>0.085</td><td>2.256</td><td>0.122</td></tr></table>
212
+
213
+ where $X = \{\pmb{x}_i\}_{i=1}^k$ and $Y = \{\pmb{y}_i\}_{i=1}^l$ are two ordered samples from a point process on a metric space of dimensionality $d$ , i.e. $D \subseteq \mathbb{R}^d$ . Further, $U := (\pmb{u}_1, \dots, \pmb{u}_d)$ represents the upper bounds of the metric space $D$ along each dimension and we assume, without loss of generality, $l \geq k$ .
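+
+ A direct NumPy transcription of Equation 12, assuming both point sets are sorted along the ordered (temporal) dimension, $X$ is the shorter set, and `upper` holds the bounds $U$ ; this is a sketch, not the exact evaluation code.
+
+ ```python
+ import numpy as np
+
+ def counting_distance(X, Y, upper):
+     """Counting distance CD(X, Y) of Eq. 12 for ordered point sets.
+
+     X : (k, d), Y : (l, d) with l >= k, both sorted along the ordered axis.
+     upper : (d,) upper bounds U of the metric space along each dimension.
+     """
+     X, Y, upper = np.asarray(X, float), np.asarray(Y, float), np.asarray(upper, float)
+     k = len(X)
+     matched = np.abs(X - Y[:k]).sum() / Y.shape[1]    # paired points, L1 distance / d
+     unmatched = np.abs(upper[None, :] - Y[k:]).sum()  # remaining points measured against U
+     return matched + unmatched
+ ```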
214
+
215
+ Wasserstein Distance (WD): An instance of a point process is itself a set of points in space. Hence, we can compute a distance between two point sets based on the Wasserstein distance on the metric space $D \subseteq \mathbb{R}^d$ between the two sets of points.
216
+
217
+ Maximum Mean Discrepancy (MMD) (Gretton et al., 2012): This kernel-based statistical test compares two distributions based on a distance metric; we use the WD for SPPs and the CD for STPPs.
218
+
219
+ # 5.3 SPATIAL POINT PROCESSES
220
+
221
+ We evaluate our model's ability to capture the distribution of spatial point processes (SPP) by benchmarking it against two methods. The first is the widely used LOG-GAUSSIAN COX PROCESS (Jesper Møller, 1998), a doubly stochastic model that parameterizes the intensity function using a Gaussian process. The second is the REGULARIZED METHOD (Osama et al., 2019), leveraging a regularized criterion to infer predictive intensity intervals, offering out-of-sample prediction guarantees and enabling conditional generation.
222
+
223
+ Unconditional Generation (Density Estimation): In this experiment, we generate 1,000 unconditional samples from each model and compare their distribution to a hold-out test set using the WD-SL and WD-MMD metrics. As shown in Table 1, our POINT SET DIFFUSION model consistently generates samples most closely matching the data distribution across all datasets. While the baseline models perform reasonably well in capturing the count distributions for most datasets, their reliance on spatial discretization and on smoothness properties of the intensity function limits their ability to capture the complex spatial patterns in the data, as reflected by higher WD-MMD scores.
224
+
225
+ Conditional Generation: To assess POINT SET DIFFUSION's ability to solve spatial conditioning tasks, we sample 50 random bounding boxes (with widths uniformly sampled between $1/8$ and $3/8$ of the metric space) for imputation on the hold-out test set, and report the results in Table 2.
226
+
227
+ The REGULARIZED METHOD fits a spatial Poisson model with out-of-sample accuracy guarantees and has been shown by Osama et al. (2019) to outperform the LOG-GAUSSIAN COX PROCESS on interpolation and extrapolation tasks. However, we find that the REGULARIZED METHOD's reliance on predicting a smooth and discretized intensity function conditioned on neighboring areas leads to inaccurate imputations when the adjacent regions contain significantly different numbers of points (see the hexagonal discretization structure and smoothness in Figure 5).
228
+
229
+ ![](images/d2d7f8c68cf55e9e4bd720162050770ca02ea9e18e3c9e08a4a6b68eee06f76c.jpg)
230
+ Figure 5: SPP conditioning task: top ground truth, middle REGULARIZED METHOD and bottom POINT SET DIFFUSION.
231
+
232
+ Table 3: Density estimation results on the hold-out test set for STPP, averaged over three random seeds (bold best and underline second best).
233
+
234
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td></tr><tr><td>DEEPSTPP</td><td>0.105</td><td>0.266</td><td>0.169</td><td>0.166</td><td>3.257</td><td>0.677</td><td>1.067</td><td>0.197</td></tr><tr><td>DIFFSTPP</td><td>0.088</td><td>0.064</td><td>0.332</td><td>0.146</td><td>0.560</td><td>0.611</td><td>0.196</td><td>0.055</td></tr><tr><td>AUTOSTPP</td><td>0.073</td><td>0.062</td><td>0.364</td><td>0.280</td><td>0.598</td><td>0.331</td><td>0.127</td><td>0.147</td></tr><tr><td>POINT SET DIFFUSION</td><td>0.042</td><td>0.023</td><td>0.189</td><td>0.043</td><td>0.032</td><td>0.020</td><td>0.023</td><td>0.020</td></tr></table>
235
+
236
+ Table 4: Forecasting results on the hold-out test set for STPP, averaged over three random seeds (bold best and underline second best).
237
+
238
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>MAE(↓)</td><td>CD(↓)</td><td>MAE(↓)</td><td>CD(↓)</td><td>MAE(↓)</td><td>CD(↓)</td><td>MAE(↓)</td><td>CD(↓)</td></tr><tr><td>DEEPSTPP</td><td>10.154</td><td>11.211</td><td>6.264</td><td>8.492</td><td>127.968</td><td>125.747</td><td>18.651</td><td>15.792</td></tr><tr><td>DIFFSTPP</td><td>16.027</td><td>17.466</td><td>18.822</td><td>14.302</td><td>7.516</td><td>8.460</td><td>14.461</td><td>13.062</td></tr><tr><td>POINT SET DIFFUSION</td><td>7.407</td><td>10.458</td><td>7.293</td><td>10.865</td><td>5.928</td><td>7.225</td><td>6.341</td><td>6.437</td></tr></table>
239
+
240
+ and smoothness in Figure 5). This issue is exacerbated by not capturing a shared intensity function across point sets, making it difficult for the REGULARIZED METHOD to handle non-smooth spatial patterns, such as varying inhomogeneous intensities shared across multiple point sets. This highlights a core limitation of SPP models that rely on instance-specific intensity functions.
241
+
242
+ # 5.4 SPATIO-TEMPORAL POINT PROCESSES
243
+
244
+ For STPPs, we compare our model to three state-of-the-art STPP models, which parameterize an autoregressive intensity function, to assess the model's ability to capture the point process distribution.
245
+
246
+ DEEPSTPP (Zhou et al., 2022) uses a latent variable framework to non-parametrically model the conditional intensity based on kernels. DIFFSTPP (Yuan et al., 2023) is based on a diffusion model approximating the conditional intensity. Lastly, AUTOSTPP (Zhou & Yu, 2023) uses the automatic integration for neural point processes, presented by Lindell et al. (2021), to parameterize a generalized spatiotemporal Hawkes model.
247
+
248
+ Sampling Runtime: We report the median sampling runtime over ten runs generating ten point sets of length $n$ on an NVIDIA A100-PCIE-40GB for all STPP models in Figure 6.
249
+
250
+ ![](images/c204156c9a086de38b2fa0b820ebb95dbb9892e27b74268f2bf4b0da4b98a2af.jpg)
251
+ Figure 6: STPP runtime for sampling $n$ points.
252
+
253
+ POINT SET DIFFUSION maintains a nearly constant runtime for all point set lengths as it generates all points in parallel, whereas autoregressive baselines, due to their sequential sampling, exhibit a linear relationship between runtime and $n$ .
254
+
255
+ Unconditional Generation (Density Estimation): We evaluate the performance of each model by comparing the WD-SL and CD-MMD between the hold-out test set and 1,000 samples generated by the trained models, as shown in Table 3. Again, the POINT SET DIFFUSION model best captures the point process distribution for all datasets. The autoregressive intensity functions of the baseline models fail to generate point sets that align closely with the data distribution for most datasets, as reflected in the differences in the WD-SL and CD-MMD metrics compared to POINT SET DIFFUSION. While these baselines are trained to predict the next event given a history window, they struggle to unconditionally sample realistic point sets when starting from an empty sequence. Consequently, this highlights our argument that the standard evaluation based on NLL is insufficient to assess the true generative capacity of point process models.
256
+
257
+ Conditional Generation (Forecasting): Forecasting future events based on historical data is a challenging and fundamental task for STPP models. To evaluate this capability, we uniformly sampled 50 random starting times from the interval $\left[\frac{5}{8} U_{\mathrm{time}}, \frac{7}{8} U_{\mathrm{time}}\right]$ , where $U_{\mathrm{time}}$ is the maximum time, for each point set in the hold-out test set.
258
+
259
+ ![](images/7e78a93f3018d63149f389c13bebd34d0eb89411152f256785b052f9a4d78eae.jpg)
260
+
261
+ ![](images/487606c2c1ff2101190f2399e668a52f69ce19c3c34d630f4f07a6098688f0bc.jpg)
262
+
263
+ ![](images/30a20603bbd23c11619d0cddb0e7220ef36e377b594a032a1f56e3653631109c.jpg)
264
+
265
+ ![](images/d0399fde5e92620d77e6ef8448f87bf58719b9dfdc7404292f38126fcbe5b594.jpg)
266
+ Figure 7: Complex spatial conditioning tasks solved with POINT SET DIFFUSION: top, condition and ground-truth data; bottom, density plots for the predictions.
267
+
268
+ ![](images/12b97689e54a471cabe5837eb6ffc9a096c2b49179531bbb4af7a613cb438c92.jpg)
269
+
270
+ ![](images/ba3eddabc252a2256b8d9aeb90a8633fbdaf52369df3aac2f91df07508f4f610.jpg)
271
+
272
+ The results are detailed in Table 4.$^{2}$ The autoregressive baselines, trained to predict the next event based on history, achieve good forecasting results for most datasets, one even surpassing POINT SET DIFFUSION on the Covid NJ dataset. Still, our unconditional model outperforms the autoregressive baselines across all other datasets.
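+
+ For concreteness, a small sketch (ours, not from the paper) of how such a forecast split can be constructed: a starting time is drawn uniformly from $\left[\frac{5}{8} U_{\mathrm{time}}, \frac{7}{8} U_{\mathrm{time}}\right]$ and the point set is divided into a conditioning history and a forecast target. The array layout and function name are assumptions.
+
+ ```python
+ import numpy as np
+
+ def make_forecast_window(times, locs, u_time, rng):
+     """Split one spatio-temporal point set into history (condition) and forecast target."""
+     t_start = rng.uniform(5 / 8 * u_time, 7 / 8 * u_time)
+     is_history = times < t_start
+     history = (times[is_history], locs[is_history])
+     target = (times[~is_history], locs[~is_history])
+     return t_start, history, target
+
+ rng = np.random.default_rng(0)
+ times = np.sort(rng.uniform(0.0, 1.0, size=120))
+ locs = rng.uniform(0.0, 1.0, size=(120, 2))
+ t_start, history, target = make_forecast_window(times, locs, u_time=1.0, rng=rng)
+ ```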
273
+
274
+ # 5.5 OTHER CONDITIONING TASK
275
+
276
+ Since the STPP baselines are autoregressive models, they are limited to forecasting tasks. However, our model can generate conditional samples for any conditioning mask $C$ on our metric space. To showcase this feature, we present a few visual examples of complex conditioning tasks in Figure 7.
277
+
278
+ # 6 CONCLUSION
279
+
280
+ To model general point processes on metric spaces, we present POINT SET DIFFUSION, a novel diffusion-based latent variable model. We derive POINT SET DIFFUSION as a stochastic interpolation between data point sets and noise point sets governed by the thinning and superposition properties of random point sets. Thereby, we attain a flexible, unconditional point process model that can be conditioned on arbitrary condition masks on the metric space and allows for efficient, parallel sampling of entire point sets without relying on the (conditional) intensity function. In conditional and unconditional experiments on synthetic and real-world SPP, TPP and STPP data, we demonstrate that POINT SET DIFFUSION achieves state-of-the-art performance while allowing for up to orders of magnitude faster sampling.
281
+
282
+ We have introduced a generative model for point processes on general metric spaces, prioritizing generality, scalability, and flexibility to address key limitations of intensity-based models. While this enables unconditional modeling and flexible generation for arbitrary conditional tasks on any metric space, it does not permit interpreting the conditional intensity or its parameters. Thus, for inference applications of STPPs or TPPs that require estimating the conditional intensity of the next event, point process models that directly approximate this conditional intensity are better suited. Ultimately, with POINT SET DIFFUSION, we have presented a novel set modeling approach and would be interested to see how future work explores its limitations on other (high-dimensional) metric spaces (e.g., Riemannian manifolds), as well as topological and discrete spaces, with potential applications extending beyond traditional point sets, including but not limited to natural language and graphs.
283
+
284
+ # ACKNOWLEDGEMENT
285
+
286
+ This research was supported by the German Research Foundation, grant GU 1409/3-1.
287
+
288
+ # REFERENCES
289
+
290
+ Robert J Berman. Determinantal point processes and fermions on complex manifolds: large deviations and bosonization. arXiv preprint arXiv:0812.4224, 2008.
291
+ Leonid Bogachev and Alexei Daletskii. Cluster point processes on manifolds. Journal of Geometry and Physics, 63:45-79, 2013.
292
+ Ricky T. Q. Chen, Brandon Amos, and Maximilian Nickel. Neural Spatio-Temporal Point Processes. In International Conference on Learning Representations, 2021.
293
+ Ricky TQ Chen, Brandon Amos, and Maximilian Nickel. Neural spatio-temporal point processes. arXiv preprint arXiv:2011.04583, 2020.
294
+ Citi Bike. System Data - Citi Bike Trip Histories, 2024. URL https://citibikenyc.com/system-data. Accessed: 2024-09-25.
295
+ David R Cox. Some statistical methods connected with series of events. Journal of the Royal Statistical Society: Series B (Methodological), 17(2):129-157, 1955.
296
+ Daryl J Daley and David Vere-Jones. An introduction to the theory of point processes: volume II: general theory and structure. Springer Science & Business Media, 2007.
297
+ Daryl J Daley, David Vere-Jones, et al. An introduction to the theory of point processes: volume I: elementary theory and methods. Springer, 2003.
298
+ Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1555-1564, 2016.
299
+ Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Scholkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012.
300
+ Alan G Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83-90, 1971.
301
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
302
+ Chengkuan Hong and Christian Shelton. Deep neyman-scott processes. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, volume 151, pp. 3627-3646. PMLR, 2022.
303
+ Jesper Møller, Anne Randi Syversveen, and Rasmus Plenge Waagepetersen. Log Gaussian Cox processes. Scandinavian Journal of Statistics, 25:451-482, 1998.
304
+ Makoto Katori and Tomoyuki Shirai. Local universality of determinantal point processes on Riemannian manifolds. 2022.
305
+ John Frank Charles Kingman. Poisson processes, volume 3. Clarendon Press, 1992.
306
+ Haitao Lin, Lirong Wu, Guojiang Zhao, Liu Pai, and Stan Z Li. Exploring generative neural temporal point process. Transactions on Machine Learning Research, 2022.
307
+ David B Lindell, Julien NP Martel, and Gordon Wetzstein. Autoint: Automatic integration for fast neural volume rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14556-14565, 2021.
308
+
309
+ David Lüdke, Marin Biloš, Oleksandr Shchur, Marten Lienen, and Stephan Gunnemann. Add and thin: Diffusion for temporal point processes. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=tn9D1dam9L.
310
+ Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
311
+ Yoshihiko Ogata. Space-time point-process models for earthquake occurrences. Annals of the Institute of Statistical Mathematics, 50:379-402, 1998.
312
+ Maya Okawa, Tomoharu Iwata, Takeshi Kurashima, Yusuke Tanaka, Hiroyuki Toda, and Naonori Ueda. Deep mixture point processes: Spatio-temporal event prediction with rich contextual information. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 373-383, 2019.
313
+ Takahiro Omi, Kazuyuki Aihara, et al. Fully neural network based model for general temporal point processes. Advances in neural information processing systems, 32, 2019.
314
+ Muhammad Osama, Dave Zachariah, and Peter Stoica. Prediction of spatial point processes: regularized method with out-of-sample guarantees. Advances in Neural Information Processing Systems, 32, 2019.
315
+ Tohru Ozaki. Maximum likelihood estimation of hawkes' self-exciting point processes. Annals of the Institute of Statistical Mathematics, 31:145-155, 1979.
316
+ Oleksandr Shchur, Marin Biloš, and Stephan Gunnemann. Intensity-free learning of temporal point processes. In International Conference on Learning Representations (ICLR), 2020a.
317
+ Oleksandr Shchur, Nicholas Gao, Marin Bilos, and Stephan Gunnemann. Fast and flexible temporal point processes with triangular maps. In Advances in Neural Information Processing Systems (NeurIPS), 2020b.
318
+ Oleksandr Shchur, Ali Caner Türkmen, Tim Januschowski, and Stephan Gunnemann. Neural temporal point processes: A review. arXiv preprint arXiv:2104.03528, 2021.
319
+ Alexander Soen, Alexander Mathews, Daniel Grixti-Cheng, and Lexing Xie. Unipoint: Universally approximating point processes intensities. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 9685-9694, 2021.
320
+ Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015.
321
+ Sandeep Soni. MHP: Generation and MLE Estimation for Multivariate Hawkes Process, 2019. URL https://github.com/sandeepsoni/MHP. Accessed: 2024-09-25.
322
+ The New York Times. Coronavirus (Covid-19) Data in the United States, 2024. URL https://github.com/nytimes/covid-19-data. Accessed: 2024-09-25.
323
+ U.S. Geological Survey, 2024. URL https://earthquake.usgs.gov/earthquakes/search/. Accessed: 2024-09-25.
324
+ Shuai Xiao, Mehrdad Farajtabar, Xiaojing Ye, Junchi Yan, Le Song, and Hongyuan Zha. Wasserstein learning of deep generative point process models. Advances in Neural Information Processing Systems, 30, 2017.
325
+ Yuan Yuan, Jingtao Ding, Chenyang Shao, Depeng Jin, and Yong Li. Spatio-temporal Diffusion Point Processes. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 3173-3184, New York, NY, USA, 2023. Association for Computing Machinery.
326
+ Qiang Zhang, Aldo Lipani, Omer Kirnap, and Emine Yilmaz. Self-attentive hawkes process. In International conference on machine learning, pp. 11183-11193. PMLR, 2020a.
327
+
328
+ Wei Zhang, Thomas Panum, Somesh Jha, Prasad Chalasani, and David Page. Cause: Learning granger causality from event sequences using attribution methods. In International Conference on Machine Learning, pp. 11235-11245. PMLR, 2020b.
329
+ Zihao Zhou and Rose Yu. Automatic Integration for Spatiotemporal Neural Point Processes. In Advances in Neural Information Processing Systems, volume 36, pp. 50237-50253, 2023.
330
+ Zihao Zhou, Xingyi Yang, Ryan Rossi, Handong Zhao, and Rose Yu. Neural Point Process for Learning Spatiotemporal Event Dynamics. In Learning for Dynamics and Control Conference, pp. 777-789, 2022.
331
+ Simiao Zuo, Haoming Jiang, Zichong Li, Tuo Zhao, and Hongyuan Zha. Transformer hawkes process. arXiv preprint arXiv:2002.09291, 2020.
332
+
333
+ # A APPENDIX
334
+
335
+ # A.1 POINT PROCESS PROPERTIES
336
+
337
+ The thinning and superposition properties have been proved by other works for different versions of point processes. For completeness and generality, we prove them for a general Borel set $A$ . To apply these proofs for SPPs consider $A \subseteq S$ , where $S$ is a metric space in $\mathbb{R}^d$ and for STPPs consider $A \subseteq [0,T] \times S$ , where $T > 0$ .
338
+
339
+ **Superposition:** Proof. It is straightforward to obtain the superposition expectation measure from Equation 1:
340
+
341
+ $$
342
+ \mu (A) = \mathbb {E} [ N (A) ] = \mathbb {E} [ N _ {1} (A) + N _ {2} (A) ] = \mathbb {E} [ N _ {1} (A) ] + \mathbb {E} [ N _ {2} (A) ] = \mu_ {1} (A) + \mu_ {2} (A). \tag {13}
343
+ $$
344
+
345
+ Then, the two point processes have intensities $\lambda_{1}$ and $\lambda_{2}$ corresponding to the expectation measures $\mu_{1}$ and $\mu_{2}$ , respectively. Therefore, taking the right-hand side of Equation 1, we obtain the following intensity function for the superposition of point processes:
346
+
347
+ $$
348
+ \mu (A) = \mu_ {1} (A) + \mu_ {2} (A) = \int_ {A} \lambda_ {1} (x) d x + \int_ {A} \lambda_ {2} (x) d x = \int_ {A} \lambda_ {1} (x) + \lambda_ {2} (x) d x. \tag {14}
349
+ $$
350
+
351
+ This states that the density function for expectation measure $\mu$ is $\lambda \coloneqq \lambda_1 + \lambda_2$ , and concludes the proof for the superposition property of intensities for point processes.
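+
+ As a quick numerical illustration (ours, not part of the proof), superposing two independent homogeneous Poisson processes with rates $\lambda_1$ and $\lambda_2$ on the unit interval yields counts whose mean matches $\lambda_1 + \lambda_2$ , in line with Equation 13:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ lam1, lam2, n_sims = 3.0, 5.0, 200000
+
+ # Counts of two independent homogeneous Poisson processes on [0, 1]
+ # and of their superposition (the union of both point sets).
+ n1 = rng.poisson(lam1, n_sims)
+ n2 = rng.poisson(lam2, n_sims)
+ n_super = n1 + n2
+
+ print(n_super.mean())   # ~ lam1 + lam2 = 8.0
+ ```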
352
+
353
+ **Thinning:** Proof. For this property, we need to assume that the point process has at most one point at each position, i.e., $N(\{x\}) \leq 1$ for every singleton; such point processes are called simple. Simple point processes can be represented as a sum of Dirac measures at the random points $X_{i} \in S$ :
354
+
355
+ $$
356
+ N = \sum_ {i} \delta_ {X _ {i}}. \tag {15}
357
+ $$
358
+
359
+ The previous assumption on singletons makes the sum above a finite sum. If $Z_{i} \in \{0,1\}$ are Bernoulli random variables with success probability $p$ , we can define a random thinning of $N$ through the following two point processes:
360
+
361
+ $$
362
+ N _ {1} = \sum_ {i} Z _ {i} \delta_ {X _ {i}}. \tag {16}
363
+ $$
364
+
365
+ $$
366
+ N _ {2} = \sum_ {i} (1 - Z _ {i}) \delta_ {X _ {i}}. \tag {17}
367
+ $$
368
+
369
+ Since there are only two options for the Bernoulli random variable, the superposition of the point processes defined in Equations 16 and 17 is equivalent to the original point process, i.e., $N = N_{1} + N_{2}$ .
370
+
371
+ Given that all $Z_{i} \sim Bern(p)$ are i.i.d., the thinned point process follows the conditional distribution $N_{1}(A)|N(A) = n \sim Binom(n,p)$ . By the law of total expectation, we derive:
372
+
373
+ $$
374
+ \mu_ {1} (A) = \mathbb {E} [ N _ {1} (A) ] = \mathbb {E} \left[ \mathbb {E} [ N _ {1} (A) | N (A) ] \right] = \mathbb {E} [ N (A) p ] = \mu (A) p. \tag {18}
375
+ $$
376
+
377
+ We can rewrite the preceding equation in terms of the intensity of the point process:
378
+
379
+ $$
380
+ \mu_ {1} (A) = p \cdot \mu (A) = p \int_ {A} \lambda (x) d x = \int_ {A} p \lambda (x) d x. \tag {19}
381
+ $$
382
+
383
+ Hence, the equation above implies that the intensity of the new point process $N_{1}$ , which keeps the points of the original point process $N$ with probability $p$ , is $p\lambda$ .
384
+
385
+ By the property of superposition, since $N = N_{1} + N_{2}$ , then $\lambda = p\lambda + (1 - p)\lambda$ . Therefore, the intensity of the point process $N_{2}$ , containing the thinned points, is $(1 - p)\lambda$ .
386
+
387
+ Conversely, this also shows that when points are removed with probability $p$ from a given point process with intensity $\lambda$ , the point process formed by the points kept after thinning has intensity $(1 - p)\lambda$ .
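+
+ A corresponding numerical sketch (again ours, purely illustrative) of the thinning property: keeping each point of a rate- $\lambda$ Poisson process independently with probability $p$ yields an empirical rate close to $p\lambda$ for the kept points and $(1-p)\lambda$ for the removed points, using the binomial conditional distribution from the proof:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ lam, p, n_sims = 10.0, 0.3, 200000
+
+ n = rng.poisson(lam, n_sims)     # counts of the base process with intensity lam
+ kept = rng.binomial(n, p)        # N_1(A) | N(A) = n  ~  Binom(n, p)
+ removed = n - kept               # N_2(A)
+
+ print(kept.mean())               # ~ p * lam = 3.0       (Equation 18)
+ print(removed.mean())            # ~ (1 - p) * lam = 7.0
+ ```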
388
+
389
+ # A.2 APPROXIMATION OF MIXTURE OF DIRAC DELTA FUNCTIONS BY $L^2$ -FUNCTIONS
390
+
391
+ Definition 1 (Dirac delta function) Let $(D,d,\mu)$ be a general metric space equipped with a measure $\mu$ . A Dirac delta function $\delta_{\pmb{x}}$ at a point $\pmb{x}\in D$ is defined as a distribution such that for any test function $f$ :
392
+
393
+ $$
394
+ \int_ {D} f (\boldsymbol {y}) \delta_ {\boldsymbol {x}} (\boldsymbol {y}) d \mu (\boldsymbol {y}) = f (\boldsymbol {x}). \tag {20}
395
+ $$
396
+
397
+ Theorem 1 Let $f_{M}(\pmb{y})$ be a finite mixture of Dirac deltas:
398
+
399
+ $$
400
+ f _ {M} (\boldsymbol {y}) = \sum_ {i = 1} ^ {n} w _ {i} \delta_ {\boldsymbol {x} _ {i}} (\boldsymbol {y}), \tag {21}
401
+ $$
402
+
403
+ where $\pmb{x}_1, \dots, \pmb{x}_n \in D$ are points in the metric space, and $w_i \in \mathbb{R}$ are weights associated with each Dirac delta function. Then, this finite mixture of Dirac deltas $f_M$ can be approximated by $L^2$ functions in $L^2(D, \mu)$ .
404
+
405
+ Proof. We use a sequence of smooth functions that approximate each Dirac delta in the mixture and then show that this approximation converges in the $L^2$ -norm.
406
+
407
+ Firstly, we show how to approximate Dirac delta functions. Let us consider a family of smooth functions $\phi_{\epsilon}(\pmb{x})$ (such as bump functions or mollifiers) that approximate the Dirac delta function $\delta_{\pmb{x}}$ as $\epsilon \rightarrow 0$ . These functions $\phi_{\epsilon}(\pmb{x} - \pmb{x}_i)$ are supported near $\pmb{x}_i$ and satisfy:
408
+
409
+ $$
410
+ \lim _ {\epsilon \rightarrow 0} \phi_ {\epsilon} (\boldsymbol {x} - \boldsymbol {x} _ {i}) = \delta_ {\boldsymbol {x} _ {i}} (\boldsymbol {x}). \tag {22}
411
+ $$
412
+
413
+ In particular, for any test function $f$ , we have:
414
+
415
+ $$
416
+ \int_ {D} f (\boldsymbol {y}) \phi_ {\epsilon} (\boldsymbol {y} - \boldsymbol {x} _ {i}) d \mu (\boldsymbol {y}) \rightarrow f (\boldsymbol {x} _ {i}) \quad \text {as} \quad \epsilon \rightarrow 0. \tag {23}
417
+ $$
418
+
419
+ Hence, $\phi_{\epsilon}(\pmb{x} - \pmb{x}_i)$ has a property similar to that of the Dirac deltas given in Equation 20 and serves as an approximation of the Dirac delta $\delta_{\pmb{x}_i}(\pmb{x})$ for a small $\epsilon$ .
420
+
421
+ Secondly, we approximate the mixture of Dirac deltas $f_{M}$ by a function in $L^2 (D,\mu)$ using the same $\phi_{\epsilon}(\pmb {x})$ -based approximation for each Dirac delta, defining:
422
+
423
+ $$
424
+ f _ {\epsilon} (\boldsymbol {y}) = \sum_ {i = 1} ^ {n} w _ {i} \phi_ {\epsilon} (\boldsymbol {y} - \boldsymbol {x} _ {i}). \tag {24}
425
+ $$
426
+
427
+ Each term $\phi_{\epsilon}(\pmb{y} - \pmb{x}_i)$ is a smooth approximation of the corresponding Dirac delta $\delta_{\pmb{x}_i}(\pmb{y})$ , and the sum represents the approximation of the entire mixture of Diracs.
428
+
429
+ Thirdly, we show that the sequence $f_{\epsilon}$ converges to $f_{M}$ in the $L^2$ -norm, i.e., that:
430
+
431
+ $$
432
+ \lim _ {\epsilon \rightarrow 0} \| f _ {\epsilon} - f _ {M} \| _ {L ^ {2} (D, \mu)} = 0. \tag {25}
433
+ $$
434
+
435
+ Since $f_{M}$ is a sum of Dirac deltas, it is not directly in $L^{2}(D,\mu)$ , but its approximation $f_{\epsilon}$ is because each $\phi_{\epsilon}$ is a smooth function and smooth functions with compact support are in $L^{2}(D,\mu)$ .
436
+
437
+ We compute now the squared $L^2$ norm of the difference in Equation 25:
438
+
439
+ $$
440
+ \left\| f _ {\epsilon} - f _ {M} \right\| _ {L ^ {2} (D, \mu)} ^ {2} = \int_ {D} \left| f _ {\epsilon} (\boldsymbol {y}) - f _ {M} (\boldsymbol {y}) \right| ^ {2} d \mu (\boldsymbol {y}). \tag {26}
441
+ $$
442
+
443
+ Note that the squared difference of $f_{\epsilon}$ and $f_{M}$ in the above equation will have squared and cross terms. However, we can neglect the cross terms, $2\sum_{i < j}w_{i}w_{j}\int_{D}\left(\phi_{\epsilon}(\boldsymbol{y} - \boldsymbol{x}_{i}) - \delta_{\boldsymbol{x}_{i}}(\boldsymbol{y})\right)\left(\phi_{\epsilon}(\boldsymbol{y} - \boldsymbol{x}_{j}) - \delta_{\boldsymbol{x}_{j}}(\boldsymbol{y})\right)d\mu (\boldsymbol{y})$ , since every smooth function $\phi_{\epsilon}(\boldsymbol{y} - \boldsymbol{x}_i)$ is concentrated near $\boldsymbol{x}_i$ and terms involving different indices do not contribute to the limit.
444
+
445
+ Hence, we can simplify the norm in Equation 26 into the sum of the individual terms:
446
+
447
+ $$
448
+ \left\| f _ {\epsilon} - f _ {M} \right\| _ {L ^ {2} (D, \mu)} ^ {2} = \sum_ {i = 1} ^ {n} \int_ {D} w _ {i} ^ {2} \left| \left(\phi_ {\epsilon} (\boldsymbol {y} - \boldsymbol {x} _ {i}) - \delta_ {\boldsymbol {x} _ {i}} (\boldsymbol {y})\right) \right| ^ {2} d \mu (\boldsymbol {y}). \tag {27}
449
+ $$
450
+
451
+ For every $i$ , the term $\int_{D}|\phi_{\epsilon}(\pmb{y} - \pmb{x}_i) - \delta_{\pmb{x}_i}(\pmb{y})|^2 d\mu (\pmb{y})$ becomes small as $\epsilon \to 0$ , because by construction $\phi_{\epsilon}(\pmb{y} - \pmb{x}_i)\rightarrow \delta_{\pmb{x}_i}(\pmb{y})$ in the sense of distributions. Thus, by the properties of $\phi_{\epsilon}$ , we conclude that:
452
+
453
+ $$
454
+ \lim _ {\epsilon \rightarrow 0} \| f _ {\epsilon} - f _ {M} \| _ {L ^ {2} (D, \mu)} = 0. \tag {28}
455
+ $$
456
+
457
+ ![](images/1c3727dd76554773e7167e98a970ee4d38e2dddf546b392b5250db377a28ae47.jpg)
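+
+ To make the mollifier argument tangible, here is a small 1D numerical sketch (ours, not part of the proof): Gaussian mollifiers $\phi_\epsilon$ centred at the $x_i$ reproduce the defining property of Equation 20 against a test function increasingly well as $\epsilon \to 0$ ; the choice of test function, centres, and weights below is arbitrary.
+
+ ```python
+ import numpy as np
+
+ def phi_eps(y, center, eps):
+     # Gaussian mollifier: a smooth L^2 approximation of a Dirac delta at `center`.
+     return np.exp(-(y - center) ** 2 / (2 * eps ** 2)) / (np.sqrt(2 * np.pi) * eps)
+
+ f = np.cos                                      # an arbitrary smooth test function
+ centers = np.array([-1.0, 0.5, 2.0])
+ weights = np.array([0.2, 0.5, 0.3])
+ y = np.linspace(-10.0, 10.0, 400001)
+ dy = y[1] - y[0]
+
+ for eps in [0.5, 0.1, 0.02]:
+     mixture = sum(w * phi_eps(y, c, eps) for w, c in zip(weights, centers))
+     approx = (f(y) * mixture).sum() * dy        # \int f(y) f_eps(y) dy
+     exact = (weights * f(centers)).sum()        # \sum_i w_i f(x_i), cf. Equation 20
+     print(f"eps={eps}: {approx:.6f} vs. {exact:.6f}")
+ ```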
458
+
459
+ Lemma 1 Let $p(\boldsymbol{x}; \boldsymbol{\mu}, \boldsymbol{\Sigma})$ be the probability density function (PDF) of a multivariate Gaussian distribution. Then $p \in L^2(\mathbb{R}^d)$ .
460
+
461
+ Proof. The PDF of a multivariate Gaussian distribution in $\mathbb{R}^d$ with mean vector $\pmb{\mu} \in \mathbb{R}^d$ and covariance matrix $\pmb{\Sigma}$ (which is positive definite) is given by:
462
+
463
+ $$
464
+ p (\boldsymbol {x}; \boldsymbol {\mu}, \boldsymbol {\Sigma}) = \frac {1}{(2 \pi) ^ {d / 2} | \boldsymbol {\Sigma} | ^ {1 / 2}} \exp \left(- \frac {1}{2} (\boldsymbol {x} - \boldsymbol {\mu}) ^ {T} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {x} - \boldsymbol {\mu})\right), \tag {29}
465
+ $$
466
+
467
+ where $\boldsymbol{x} \in \mathbb{R}^d$ , $|\boldsymbol{\Sigma}|$ is the determinant of the covariance matrix $\boldsymbol{\Sigma}$ , and $\boldsymbol{\Sigma}^{-1}$ is the inverse of the covariance matrix. We show that $\|p\|_{L^2} = \left(\int_{\mathbb{R}^d} |p(\boldsymbol{x})|^2 d\boldsymbol{x}\right)^{1/2}$ is finite.
468
+
469
+ We need to compute the following integral:
470
+
471
+ $$
472
+ \int_ {\mathbb {R} ^ {d}} p (\boldsymbol {x}) ^ {2} d \boldsymbol {x} = \frac {1}{(2 \pi) ^ {d} | \boldsymbol {\Sigma} |} \int_ {\mathbb {R} ^ {d}} \exp \left(- \left(\boldsymbol {x} - \boldsymbol {\mu}\right) ^ {T} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {x} - \boldsymbol {\mu})\right) d \boldsymbol {x}. \tag {30}
473
+ $$
474
+
475
+ To simplify the calculation, we perform a change of variables: $\pmb{y} = \pmb{\Sigma}^{-1/2}(\pmb{x} - \pmb{\mu})$ . Under this transformation: $(\pmb{x} - \pmb{\mu})^T \pmb{\Sigma}^{-1}(\pmb{x} - \pmb{\mu}) = \pmb{y}^T \pmb{y} = \| \pmb{y} \|^2$ , and the differential $d\pmb{x}$ transforms as: $d\pmb{x} = |\pmb{\Sigma}^{1/2}| d\pmb{y} = |\pmb{\Sigma}|^{1/2} d\pmb{y}$ . Substituting these into the integral, we get:
476
+
477
+ $$
478
+ \int_ {\mathbb {R} ^ {d}} \exp \left(- \left(\boldsymbol {x} - \boldsymbol {\mu}\right) ^ {T} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {x} - \boldsymbol {\mu})\right) d \boldsymbol {x} = | \boldsymbol {\Sigma} | ^ {1 / 2} \int_ {\mathbb {R} ^ {d}} \exp \left(- \| \boldsymbol {y} \| ^ {2}\right) d \boldsymbol {y} = | \boldsymbol {\Sigma} | ^ {1 / 2} \pi^ {d / 2}, \tag {31}
479
+ $$
480
+
481
+ since the remaining integral is a standard Gaussian integral. Thus, the $L^2$ -norm integral becomes:
482
+
483
+ $$
484
+ \int_ {\mathbb {R} ^ {d}} p (\boldsymbol {x}) ^ {2} d \boldsymbol {x} = \frac {1}{(2 \pi) ^ {d} | \boldsymbol {\Sigma} |} | \boldsymbol {\Sigma} | ^ {1 / 2} \pi^ {d / 2} = \frac {1}{2 ^ {d} \pi^ {d / 2} | \boldsymbol {\Sigma} | ^ {1 / 2}}. \tag {32}
485
+ $$
486
+
487
+ Since the integral is a finite constant, we conclude that the PDF belongs to $L^2 (\mathbb{R}^d)$ .
488
+
489
+ ![](images/244aadd5411499f2ee6350176b6f6e1fe04b0ec9511d1c5ae6e4ffcb43ee3350.jpg)
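+
+ A quick numerical sanity check of Lemma 1 (ours) in $d = 2$ : integrating the squared Gaussian density on a grid reproduces the closed form $1 / (2^d \pi^{d/2} |\boldsymbol{\Sigma}|^{1/2})$ from Equation 32. The mean and covariance below are arbitrary.
+
+ ```python
+ import numpy as np
+
+ d = 2
+ mu = np.array([0.3, -0.7])
+ Sigma = np.array([[1.0, 0.4], [0.4, 0.8]])
+ Sigma_inv, det = np.linalg.inv(Sigma), np.linalg.det(Sigma)
+
+ # Evaluate the Gaussian PDF on a grid covering essentially all of its mass.
+ xs = np.linspace(-8.0, 8.0, 801)
+ X, Y = np.meshgrid(xs, xs, indexing="ij")
+ diff = np.stack([X - mu[0], Y - mu[1]], axis=-1)
+ quad = np.einsum("...i,ij,...j->...", diff, Sigma_inv, diff)
+ pdf = np.exp(-0.5 * quad) / ((2 * np.pi) ** (d / 2) * det ** 0.5)
+
+ dx = xs[1] - xs[0]
+ numeric = (pdf ** 2).sum() * dx * dx
+ closed_form = 1.0 / (2 ** d * np.pi ** (d / 2) * det ** 0.5)
+ print(numeric, closed_form)       # the two values agree closely
+ ```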
490
+
491
+ Corollary 1 Given a Euclidean space $D \subseteq \mathbb{R}^d$ , a finite sum of Dirac deltas can be approximated with a mixture of multivariate Gaussian distributions:
492
+
493
+ $$
494
+ p _ {M} (\boldsymbol {x}) = \sum_ {i = 1} ^ {n} w _ {i} \cdot \mathcal {N} (\boldsymbol {x}; \boldsymbol {\mu} _ {i}, \boldsymbol {\Sigma} _ {i}). \tag {33}
495
+ $$
496
+
497
+ Proof. We do not claim that a mixture of multivariate Gaussian distributions is the best candidate to approximate a finite sum of Dirac deltas. However, a multivariate Gaussian distribution is a standard approximation of a Dirac delta function and approximates it in the limit of a small covariance matrix, i.e. $|\Sigma| \ll 1$ .
498
+
499
+ The aim of this proof is to show that the mixture of Gaussians $p_M$ is a valid candidate to approximate the Dirac deltas. By Theorem 1, this is equivalent to showing that $p_M$ is an $L^2$ function.
500
+
501
+ To prove this, we need to integrate:
502
+
503
+ $$
504
+ \int_ {D} p _ {M} (\boldsymbol {x}) ^ {2} d \boldsymbol {x} = \sum_ {i = 1} ^ {n} w _ {i} ^ {2} \int_ {D} \mathcal {N} \left(\boldsymbol {x}; \boldsymbol {\mu} _ {i}, \boldsymbol {\Sigma} _ {i}\right) ^ {2} d \boldsymbol {x} + 2 \sum_ {i < j} w _ {i} w _ {j} \int_ {D} \mathcal {N} \left(\boldsymbol {x}; \boldsymbol {\mu} _ {i}, \boldsymbol {\Sigma} _ {i}\right) \mathcal {N} \left(\boldsymbol {x}; \boldsymbol {\mu} _ {j}, \boldsymbol {\Sigma} _ {j}\right) d \boldsymbol {x}. \tag {34}
505
+ $$
506
+
507
+ On the one hand, Lemma 1 shows that the integrals of the first sum are finite constants. On the other hand, the integrals on the second sum cannot be computed in a closed form, but it is well known that the product decays exponentially as $\| \pmb{x} \| \to \infty$ ensuring a finite integral. Therefore, the squared integral is just a sum of finite constants and hence finite.
508
+
509
+ # A.3 SAMPLING ALGORITHM
510
+
511
+ # Algorithm 2 Sampling
512
+
513
+ 1: $X_{T} \sim \lambda_{\epsilon}$
514
+ 2: for $t = T, \dots, 1$ do
515
+ 3: $\widetilde{X}_t^{thin} \sim g_\theta(\pmb{x} \in X_t^{thin} | X_t, t)$
516
+ 4: $\tilde{X}_0\setminus X_t\sim f_\theta (X|X_t,t)$
517
+ 5: $\tilde{X}_0 = (\tilde{X}_0\setminus X_t)\cup \tilde{X}_t^{thin}$
518
+ 6: $X_{t - 1}\sim q(X_{t - 1}\mid \widetilde{X}_0,X_t)$
519
+ 7: end for
520
+ 8: return $X_{0}$
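+
+ The following is a structural sketch of Algorithm 2 in Python (ours, not the paper's implementation). The four callables stand in for the noise point process $\lambda_\epsilon$ , the learned keep-classifier $g_\theta$ , the learned intensity model $f_\theta$ , and the posterior $q(X_{t-1} \mid X_0, X_t)$ ; the dummy components at the bottom only illustrate the call signatures and are not the paper's models.
+
+ ```python
+ import numpy as np
+
+ def point_set_diffusion_sample(noise_process, g_theta, f_theta, posterior_q, T):
+     """Structural skeleton of Algorithm 2; the callables are placeholders."""
+     X_t = noise_process()                          # line 1: X_T ~ lambda_eps
+     for t in range(T, 0, -1):
+         kept = g_theta(X_t, t)                     # line 3: points of X_t kept in X_0
+         new = f_theta(X_t, t)                      # line 4: predicted points of X_0 \ X_t
+         X_0_hat = np.concatenate([new, kept])      # line 5: combined estimate of X_0
+         X_t = posterior_q(X_0_hat, X_t, t)         # line 6: draw X_{t-1}
+     return X_t                                     # line 8: the final sample X_0
+
+ # Dummy components, only to show the interface (NOT the learned models of the paper).
+ rng = np.random.default_rng(0)
+ sample = point_set_diffusion_sample(
+     noise_process=lambda: rng.uniform(0, 1, size=(rng.poisson(50), 3)),
+     g_theta=lambda X, t: X[rng.random(len(X)) < t / 100],
+     f_theta=lambda X, t: rng.uniform(0, 1, size=(rng.poisson(5), 3)),
+     posterior_q=lambda X0, Xt, t: X0,
+     T=100,
+ )
+ ```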
521
+
522
+ # A.4 MODEL SETUP
523
+
524
+ Architecture: The classifier to predict $X_0 \cap X_t$ is an MLP with 3 layers and ReLU as the activation function. The mixture of multivariate Gaussian distributions that approximates $X_0 \setminus X_t$ contains 16 components, and its parameters are learned with an MLP of 2 layers and ReLU as the activation function.
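+
+ A minimal PyTorch sketch (ours) of what these two heads could look like. The layer counts and the 16 mixture components follow the description above, but the input featurization, dimensions, and all names are assumptions rather than the paper's actual code.
+
+ ```python
+ import torch.nn as nn
+
+ class KeepClassifier(nn.Module):
+     """3-layer ReLU MLP scoring whether a point of X_t also belongs to X_0."""
+     def __init__(self, in_dim, hidden=32):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(in_dim, hidden), nn.ReLU(),
+             nn.Linear(hidden, hidden), nn.ReLU(),
+             nn.Linear(hidden, 1),
+         )
+
+     def forward(self, point_features):               # (num_points, in_dim)
+         return self.net(point_features).squeeze(-1)  # one keep-logit per point
+
+ class MixtureHead(nn.Module):
+     """2-layer ReLU MLP predicting a 16-component Gaussian mixture for X_0 \\ X_t."""
+     def __init__(self, in_dim, event_dim, components=16, hidden=32):
+         super().__init__()
+         self.components, self.event_dim = components, event_dim
+         out_dim = components * (1 + 2 * event_dim)   # weight, mean, log-scale per component
+         self.net = nn.Sequential(
+             nn.Linear(in_dim, hidden), nn.ReLU(),
+             nn.Linear(hidden, out_dim),
+         )
+
+     def forward(self, context):                      # (in_dim,) pooled context vector
+         params = self.net(context)
+         logits = params[: self.components]
+         means, log_scales = params[self.components :].reshape(2, self.components, self.event_dim)
+         return logits, means, log_scales
+ ```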
525
+
526
+ Training: All models have been trained on an NVIDIA A100-PCIE-40GB. We use Adam as the optimizer and a fixed weight decay of 0.0001 to avoid overfitting. To avoid exploding gradients, we clip the gradients to have a norm lower than 2.
527
+
528
+ Hyperparameters: We use the same hyperparameters for all datasets and types of point processes. In a hyperparameter study (Appendix A.8), we found $T = 100$ for our cosine noise schedule (Nichol et al., 2021) to give a good trade-off between sampling time and quality. Further, we leverage a hidden dimension and embedding size of 32. For training, we use a batch size of 128 and a learning rate of 0.001.
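+
+ For reference, the cosine schedule referenced above is commonly defined as $\bar{\alpha}_t = f(t)/f(0)$ with $f(t) = \cos^2\left(\frac{t/T + s}{1 + s} \cdot \frac{\pi}{2}\right)$ and a small offset $s$ ; a short sketch for $T = 100$ is given below. How $\bar{\alpha}_t$ is mapped to the thinning and superposition probabilities of the forward process is specific to the paper and not reproduced here.
+
+ ```python
+ import numpy as np
+
+ def cosine_alpha_bar(T=100, s=0.008):
+     # alpha_bar_t = f(t) / f(0),  f(t) = cos^2(((t / T + s) / (1 + s)) * pi / 2)
+     t = np.arange(T + 1)
+     f = np.cos(((t / T + s) / (1 + s)) * np.pi / 2) ** 2
+     return f / f[0]
+
+ alpha_bar = cosine_alpha_bar(T=100)
+ print(alpha_bar[0], alpha_bar[50], alpha_bar[100])   # 1.0 -> ~0.5 -> ~0.0, monotonically decreasing
+ ```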
529
+
530
+ Early stopping: We train the models for up to 5000 epochs with early stopping, sampling 100 sequences from the model and comparing them to the validation split using the WD-SL metric for SPPs and the CD-MMD metric for STPPs.
531
+
532
+ # A.5 EXPERIMENTAL RESULTS WITH STANDARD DEVIATIONS
533
+
534
+ Table 5: Density estimation results on the hold-out test set for SPPs averaged over three random seeds.
535
+
536
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td></tr><tr><td>Log-Gaussian Cox</td><td>0.047±0.014</td><td>0.214±0.004</td><td>0.209±0.011</td><td>0.340±0.008</td><td>0.104±0.017</td><td>0.336±0.014</td><td>0.017±0.004</td><td>0.285±0.004</td></tr><tr><td>Regularized Method</td><td>2.361±0.064</td><td>0.391±0.004</td><td>0.255±0.011</td><td>0.411±0.003</td><td>0.097±0.008</td><td>0.342±0.008</td><td>0.039±0.003</td><td>0.411±0.004</td></tr><tr><td>POINT SET DIFFUSION</td><td>0.038±0.003</td><td>0.173±0.004</td><td>0.199±0.002</td><td>0.268±0.016</td><td>0.056±0.020</td><td>0.092±0.020</td><td>0.017±0.003</td><td>0.099±0.006</td></tr></table>
537
+
538
+ Table 6: Conditional generation results on the hold-out test set for SPP averaged over three random seeds.
539
+
540
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>MAE(↓)</td><td>WD(↓)</td><td>MAE(↓)</td><td>WD(↓)</td><td>MAE(↓)</td><td>WD(↓)</td><td>MAE(↓)</td><td>WD(↓)</td></tr><tr><td>Regularized Method</td><td>30.419 ± 0.278</td><td>0.162 ± 0.003</td><td>16.075 ± 0.236</td><td>0.148 ± 0.001</td><td>7.740 ± 0.173</td><td>0.115 ± 0.001</td><td>3.547 ± 0.104</td><td>0.150 ± 0.003</td></tr><tr><td>POINT SET DIFFUSION</td><td>4.651 ± 0.159</td><td>0.106 ± 0.001</td><td>5.056 ± 0.115</td><td>0.119 ± 0.001</td><td>3.498 ± 0.365</td><td>0.085 ± 0.014</td><td>2.256 ± 0.037</td><td>0.122 ± 0.001</td></tr></table>
541
+
542
+ Table 7: Density estimation results on the hold-out test set for STPP averaged over three random seeds.
543
+
544
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td><td>SL(↓)</td><td>MMD(↓)</td></tr><tr><td>DeepSTPP</td><td>0.105±0.027</td><td>0.266±0.041</td><td>0.169±0.089</td><td>0.166±0.177</td><td>3.257±0.685</td><td>0.677±0.056</td><td>1.067±0.893</td><td>0.197±0.152</td></tr><tr><td>DiffSTPP</td><td>0.088±0.009</td><td>0.064±0.024</td><td>0.332±0.012</td><td>0.146±0.026</td><td>0.560±0.045</td><td>0.611±0.113</td><td>0.196±0.098</td><td>0.055±0.005</td></tr><tr><td>AutoSTPP</td><td>0.073±0.007</td><td>0.062±0.004</td><td>0.364±0.040</td><td>0.280±0.202</td><td>0.598±0.047</td><td>0.331±0.099</td><td>0.127±0.004</td><td>0.147±0.005</td></tr><tr><td>POINT SET DIFFUSION</td><td>0.042±0.003</td><td>0.023±0.003</td><td>0.189±0.006</td><td>0.043±0.003</td><td>0.032±0.004</td><td>0.020±0.001</td><td>0.023±0.003</td><td>0.020±0.001</td></tr></table>
545
+
546
+ Table 8: Forecasting results on the hold-out test set for STPP averaged over three random seeds.
547
+
548
+ <table><tr><td rowspan="2"></td><td colspan="2">Earthquakes</td><td colspan="2">Covid NJ</td><td colspan="2">Citybike</td><td colspan="2">Pinwheel</td></tr><tr><td>MAE(↓)</td><td>CD(↓)</td><td>MAE(↓)</td><td>CD(↓)</td><td>MAE(↓)</td><td>CD(↓)</td><td>MAE(↓)</td><td>CD(↓)</td></tr><tr><td>DeepSTPP</td><td>10.154 ± 0.918</td><td>11.211 ± 0.738</td><td>6.264 ± 0.378</td><td>8.492 ± 0.196</td><td>127.968 ± 33.298</td><td>125.747 ± 32.705</td><td>18.651 ± 7.159</td><td>15.792 ± 5.323</td></tr><tr><td>DiffSTPP</td><td>16.027 ± 6.833</td><td>17.466 ± 5.748</td><td>18.822 ± 3.381</td><td>14.302 ± 0.216</td><td>7.516 ± 1.973</td><td>8.460 ± 1.773</td><td>14.461 ± 4.816</td><td>13.062 ± 3.901</td></tr><tr><td>POINT SET DIFFUSION</td><td>7.407 ± 0.285</td><td>10.458 ± 0.218</td><td>7.293 ± 0.082</td><td>10.865 ± 0.130</td><td>5.928 ± 2.881</td><td>7.225 ± 2.802</td><td>6.341 ± 0.108</td><td>6.437 ± 0.124</td></tr></table>
549
+
550
+ # A.6 PERFORMANCE COMPARISON TO ADD-THIN ON THEIR TPP EXPERIMENTS
551
+
552
+ We compare our POINT SET DIFFUSION to ADD-THIN (Lüdke et al., 2023) on their TPP experiments. We use the same training and hyper-parameter setup for our model as in the SPP and STPP experiments. For details on the experimental setup, please refer to Section 5 of Lüdke et al. (2023).
553
+
554
+ # A.6.1 DENSITY ESTIMATION
555
+
556
+ Table 9: MMD (↓) between the TPP distribution of sampled sequences and hold-out test set (bold best).
557
+
558
+ <table><tr><td></td><td>Hawkes1</td><td>Hawkes2</td><td>SC</td><td>IPP</td><td>RP</td><td>MRP</td><td>PUBG</td><td>Reddit-C</td><td>Reddit-S</td><td>Taxi</td><td>Twitter</td><td>Yelp1</td><td>Yelp2</td></tr><tr><td>ADD-THIN</td><td>0.02</td><td>0.02</td><td>0.19</td><td>0.03</td><td>0.02</td><td>0.10</td><td>0.03</td><td>0.01</td><td>0.02</td><td>0.04</td><td>0.04</td><td>0.08</td><td>0.04</td></tr><tr><td>POINT SET DIFFUSION</td><td>0.03</td><td>0.03</td><td>0.19</td><td>0.02</td><td>0.04</td><td>0.07</td><td>0.05</td><td>0.01</td><td>0.02</td><td>0.11</td><td>0.09</td><td>0.06</td><td>0.06</td></tr></table>
559
+
560
+ Table 10: Wasserstein distance $(\downarrow)$ between the distribution of the number of events of sampled sequences and hold-out test set (bold best).
561
+
562
+ <table><tr><td></td><td>Hawkes1</td><td>Hawkes2</td><td>SC</td><td>IPP</td><td>RP</td><td>MRP</td><td>PUBG</td><td>Reddit-C</td><td>Reddit-S</td><td>Taxi</td><td>Twitter</td><td>Yelp1</td><td>Yelp2</td></tr><tr><td>ADD-THIN</td><td>0.04</td><td>0.02</td><td>0.08</td><td>0.01</td><td>0.02</td><td>0.04</td><td>0.02</td><td>0.03</td><td>0.04</td><td>0.03</td><td>0.01</td><td>0.04</td><td>0.02</td></tr><tr><td>POINT SET DIFFUSION</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.01</td><td>0.03</td><td>0.02</td><td>0.01</td><td>0.03</td><td>0.03</td><td>0.10</td><td>0.01</td><td>0.03</td><td>0.03</td></tr></table>
563
+
564
+ # A.6.2 CONDITIONAL GENERATION - FORECASTING
565
+
566
+ Table 11: Wasserstein distance $(\downarrow)$ between forecasted event sequence and ground truth reported for 50 random forecast windows on the test set (lower is better).
567
+
568
+ <table><tr><td></td><td>PUBG</td><td>Reddit-C</td><td>Reddit-S</td><td>Taxi</td><td>Twitter</td><td>Yelp1</td><td>Yelp2</td></tr><tr><td>Average Seq. Length</td><td>76.5</td><td>295.7</td><td>1129.0</td><td>98.4</td><td>14.9</td><td>30.5</td><td>55.2</td></tr><tr><td>ADD-THIN</td><td>2.03</td><td>17.18</td><td>21.32</td><td>2.42</td><td>1.48</td><td>1.00</td><td>1.54</td></tr><tr><td>POINT SET DIFFUSION</td><td>1.98</td><td>16.90</td><td>16.23</td><td>2.52</td><td>1.51</td><td>0.96</td><td>1.50</td></tr></table>
569
+
570
+ Table 12: Count MAPE $\times 100\%$ (↓) between forecasted event sequences and ground truth reported for 50 random forecast windows on the test set (lower is better).
571
+
572
+ <table><tr><td></td><td>PUBG</td><td>Reddit-C</td><td>Reddit-S</td><td>Taxi</td><td>Twitter</td><td>Yelp1</td><td>Yelp2</td></tr><tr><td>Average Seq. Length</td><td>76.5</td><td>295.7</td><td>1129.0</td><td>98.4</td><td>14.9</td><td>30.5</td><td>55.2</td></tr><tr><td>ADD-THIN</td><td>0.45</td><td>1.07</td><td>0.38</td><td>0.37</td><td>0.69</td><td>0.45</td><td>0.50</td></tr><tr><td>POINT SET DIFFUSION</td><td>0.44</td><td>1.13</td><td>0.26</td><td>0.41</td><td>0.60</td><td>0.46</td><td>0.47</td></tr></table>
573
+
574
+ # A.7 ADDITIONAL MATERIAL FOR COMPUTATIONAL COMPLEXITY OF STPP MODELS
575
+
576
+ Table 13: Number of learnable parameters per model.
577
+
578
+ <table><tr><td>DEEPSTPP</td><td>DIFFSTPP</td><td>AUTOSTPP</td><td>POINT SET DIFFUSION</td></tr><tr><td>~450,000</td><td>~1,600,000</td><td>~1,000,000</td><td>~25,000</td></tr></table>
579
+
580
+ Table 14: Training runtime in minutes averaged over three random seeds (all models have been trained on an A100).
581
+
582
+ <table><tr><td></td><td>Earthquakes</td><td>Covid NJ</td><td>Citybike</td><td>Pinwheel</td></tr><tr><td>DEEPSTPP</td><td>32</td><td>28</td><td>71</td><td>17</td></tr><tr><td>DIFFSTPP</td><td>469</td><td>492</td><td>832</td><td>392</td></tr><tr><td>AUTOSTPP</td><td>99</td><td>156</td><td>523</td><td>36</td></tr><tr><td>POINT SET DIFFUSION</td><td>52</td><td>42</td><td>68</td><td>50</td></tr></table>
583
+
584
+ # A.8 HYPERPARAMETER STUDY $T$
585
+
586
+ To provide insight into how the number of steps affects sample quality, we have run a hyperparameter study for the unconditional STPP experiment on the validation set of the Earthquake dataset, evaluating $T \in \{20, 50, 100, 200\}$ , averaged over three random seeds. Our findings indicate that while fewer diffusion steps result in reduced sample quality, $T = 100$ strikes a good balance, already matching and even surpassing the quality observed at $T = 200$ . Although this result may seem counterintuitive to those familiar with standard Gaussian diffusion models, it highlights a key distinction of our approach: unlike Gaussian diffusion processes, our model employs inherently discrete Markov steps—specifically, the superposition and thinning of point sets with fixed cardinality. As a result, only a limited number of points can be added or removed over $T$ steps, imposing a natural ceiling on how much additional steps can improve sample quality.
587
+
588
+ Table 15: STPP density estimation results on the Earthquake validation set for $T \in \{20,50,100,200\}$ reported as the average and standard error over three random seeds.
589
+
590
+ <table><tr><td></td><td>20</td><td>50</td><td>100</td><td>200</td></tr><tr><td>SL (↓)</td><td>0.018 ± 0.002</td><td>0.017 ± 0.002</td><td>0.014 ± 0.002</td><td>0.015 ± 0.001</td></tr><tr><td>MMD (↓)</td><td>0.020 ± 0.0015</td><td>0.020 ± 0.0002</td><td>0.018 ± 0.0012</td><td>0.018 ± 0.0005</td></tr></table>
591
+
592
+ # A.9 STPP FORECASTING DENSITY EVOLUTION
593
+
594
+ ![](images/6adb6ab1e6bf415d506d7df6e19b51348311f153af3bd22d8a1121fb4f160321.jpg)
595
+ Figure 8: Evolution of two STPP forecasts of POINT SET DIFFUSION across time $(0\to t_{max})$ . Density plots of the forecast for a sliding window of $\frac{1}{6}$ of the maximum time; black crosses represent the history (conditioning), blue the ground-truth future.
2025/Unlocking Point Processes through Point Set Diffusion/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17bd2c444fd03c25237f22933f5fa91f51b86c67b630e0d4816cee339f3b02c8
3
+ size 951887
2025/Unlocking Point Processes through Point Set Diffusion/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking the Potential of Model Calibration in Federated Learning/430190dc-b0a8-4e8b-858f-8d860a87a1e9_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking the Potential of Model Calibration in Federated Learning/430190dc-b0a8-4e8b-858f-8d860a87a1e9_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking the Potential of Model Calibration in Federated Learning/430190dc-b0a8-4e8b-858f-8d860a87a1e9_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc65813e341e530566f3b7379d25c89db806d16a7cd8f01e19326a31277302ba
3
+ size 4814878
2025/Unlocking the Potential of Model Calibration in Federated Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unlocking the Potential of Model Calibration in Federated Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cbdd35796be355c963d79b4cceaa1a049bcbcdb625edc8ba156490a883a6ad5
3
+ size 3844044
2025/Unlocking the Potential of Model Calibration in Federated Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/dc91f6c8-22c0-4dce-99c6-519db6833887_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/dc91f6c8-22c0-4dce-99c6-519db6833887_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/dc91f6c8-22c0-4dce-99c6-519db6833887_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:504fe6af3550f9afb18b26f2c39252c26026395519425b5d4c5c90cd34aa2459
3
+ size 34429236
2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/full.md ADDED
@@ -0,0 +1,438 @@
 
 
 
 
1
+ # UNPOSED SPARSE VIEWS ROOM LAYOUT RECONSTRUCTION IN THE AGE OF PRETRAIN MODEL
2
+
3
+ Yaxuan Huang $^{1*}$
4
+ Xiangyu Yue $^{5\dagger}$
5
+
6
+ Xili Dai\*
7
+
8
+ Jianan Wang3
9
+
10
+ Xianbiao Qi
11
+
12
+ Yixing Yuan
13
+
14
+ $^{1}$ Hong Kong Center for Construction Robotics, The Hong Kong University of Science and Technology
15
+ <sup>2</sup>The Hong Kong University of Science and Technology (Guangzhou)
16
+ $^{3}$ Atribot $^{4}$ Intellifusion Inc.
17
+ <sup>5</sup>Multimedia Lab (MMLab) and SHIAE, The Chinese University of Hong Kong
18
+
19
+ ![](images/39ed962f3795ab74173eba07280f3facbd1c32ba189f11d2ce3771f10f3c2683.jpg)
20
+ Figure 1: We present a novel method for estimating room layouts from a set of unconstrained indoor images. Our approach demonstrates robust generalization capabilities, performing well on both in-the-wild datasets (Zhou et al., 2018) and out-of-domain cartoon (Weber et al., 2024) data.
21
+
22
+ ![](images/12ce5dfa8e5900102f2fa76583de4a21fab3e80f52d8832780609cecf464af75.jpg)
23
+
24
+ # ABSTRACT
25
+
26
+ Room layout estimation from multiple-perspective images is poorly investigated due to the complexities that emerge from multi-view geometry, which requires multi-step solutions such as camera intrinsic and extrinsic estimation, image matching, and triangulation. However, in 3D reconstruction, the advancement of recent 3D foundation models such as DUSt3R has shifted the paradigm from the traditional multi-step structure-from-motion process to an end-to-end single-step approach. To this end, we introduce Plane-DUSt3R, a novel method for multiview room layout estimation leveraging the 3D foundation model DUSt3R. Plane-DUSt3R incorporates the DUSt3R framework and fine-tunes on a room layout dataset (Structure3D) with a modified objective to estimate structural planes. By generating uniform and parsimonious results, Plane-DUSt3R enables room layout estimation with only a single post-processing step and 2D detection results. Unlike previous methods that rely on single-perspective or panorama images, Plane-DUSt3R extends the setting to handle multiple-perspective images. Moreover, it offers a streamlined, end-to-end solution that simplifies the process and reduces error accumulation. Experimental results demonstrate that Plane-DUSt3R not only outperforms state-of-the-art methods on the synthetic dataset but also proves robust and effective on in-the-wild data with different image styles such as cartoon. Our code is available at: https://github.com/justacar/Plane-DUSt3R
27
+
28
+ # 1 INTRODUCTION
29
+
30
+ 3D room layout estimation aims to predict the overall spatial structure of indoor scenes, playing a crucial role in understanding 3D indoor scenes and supporting a wide range of applications. For example, room layouts could serve as a reference for aligning and connecting other objects in indoor environment reconstruction (Nie et al., 2020). Accurate layout estimation also aids robotic path planning and navigation by identifying passable areas (Mirowski et al., 2016). Additionally, room layouts are essential in tasks such as augmented reality (AR), where spatial understanding is critical.
31
+
32
+ Therefore, 3D room layout estimation has attracted considerable research attention, with continued development of datasets (Zheng et al., 2020; Wang et al., 2022) and methods (Yang et al., 2022; Stekovic et al., 2020; Wang et al., 2022) over the past few decades.
33
+
34
+ Methods for 3D room layout estimation (Zhang et al., 2015; Hedau et al., 2009; Yang et al., 2019) initially relied on the Manhattan assumption with a single perspective or panorama image as input. Over time, advancements (Stekovic et al., 2020) have relaxed the Manhattan assumption to accommodate more complex settings, such as the Atlanta model, or even no geometric assumption at all. Recently, Wang et al. (2022) introduced a "multi-view" approach, capturing a single room with two panorama images, marking the first attempt to extend the input from a single image to multiple images. Despite this progress, exploration in this direction remains limited, hindered by the lack of well-annotated multi-view 3D room layout estimation dataset.
35
+
36
+ Currently, multi-view datasets with layout annotations are very scarce. Even the few existing datasets, such as Structure3D (Zheng et al., 2020), provide only a small number of perspective views (typically ranging from 2 to 5). This scarcity of observable views highlights a critical issue: wide-baseline sparse-view structure from motion (SfM) remains an open problem. Most contemporary multi-view methods (Wang et al., 2022; Hu et al., 2022) assume known camera poses or start with noisy camera pose estimates. Therefore, solving wide-baseline sparse-view SfM would significantly advance the field of multi-view 3D room layout estimation. The recent development of large-scale training and improved model architecture offers a potential solution. While GPT-3 (Brown, 2020) and Sora (Brooks et al., 2024) have revolutionized NLP and video generation, DUSt3R (Wang et al., 2024) brings a paradigm shift for multi-view 3D reconstruction, transitioning from a multi-step SfM process to an end-to-end approach. DUSt3R demonstrates the ability to reconstruct scenes from unposed images, without camera intrinsic/extrinsic or even view overlap. For example, with two unposed, potentially non-overlapping views, DUSt3R could generate a 3D pointmap while inferring reasonable camera intrinsic and extrinsic, providing an ideal solution to the challenges posed by wide-baseline sparse-view SfM in multi-view 3D room layout estimation.
37
+
38
+ In this paper, we employ DUSt3R to tackle the multi-view 3D room layout estimation task. Most single-view layout estimation methods (Yang et al., 2022) follow a two-step process: 1) extracting 2D & 3D information, and 2) lifting the results to a 3D layout with layout priors. When extending this approach to multi-view settings, an additional step is required: establishing geometric primitive correspondence across multi-view before the 3D lifting step. Given the limited number of views in existing multi-view layout datasets, this correspondence-establishing step essentially becomes a sparse-view SfM problem. Hence, incorporating a single-view layout estimation method with DUSt3R to handle multi-view layout estimation is a natural approach. However, this may introduce a challenge: independent plane normal estimation for each image fails to leverage shared information across views, potentially reducing generalizability to unseen data in the wild. To this end, we adopt DUSt3R to solve correspondence establishment and 3D lifting simultaneously, which jointly predict plane normal and lift 2D detection results to 3D. Specifically, we modify DUSt3R to estimate room layouts directly through dense 3D point representation (pointmap), focusing exclusively on structural surfaces while ignoring occlusions. This is achieved by retraining DUSt3R with the objective to predict only structural planes, the resulting model is named Plane-DUSt3R. However, dense pointmap representation is redundant for room layout, as a plane can be efficiently represented by its normal and offset rather than a large number of 3D points, which may consume significant space. To streamline the process, we leverage well-established off-the-shelf 2D plane detector to guide the extraction of plane parameters from the pointmap. We then apply post-processing to obtain plane correspondences across different images and derive their adjacency relationships.
39
+
40
+ Compared to existing room layout estimation methods, our approach introduces the first pipeline capable of unposed multi-view (perspective images) layout estimation. Our contributions can be summarized as follows:
41
+
42
+ 1. We propose an unposed multi-view (sparse-view) room layout estimation pipeline. To the best of our knowledge, this is the first attempt at addressing this natural yet underexplored setting in room layout estimation.
43
+ 2. The introduced pipeline consists of three parts: 1) a 2D plane detector, 2) a 3D information prediction and correspondence establishment method, Plane-DUSt3R, and 3) a post-processing algorithm. The 2D detector was retrained with SOTA results on the Structure3D dataset (see Table 3).
44
+
45
+ Plane-DUSt3R achieves a $5.27\%$ and $5.33\%$ improvement in RRA and mAA metrics for the multi-view correspondence task compared to state-of-the-art methods (see Table 2).
46
+
47
+ 3. In this novel setting, we also design several baseline methods for comparison to validate the advantages of our pipeline. Specifically, we outperform the baselines on four 2D projection metrics and one 3D metric (see Table 1). Furthermore, our pipeline not only performs well on the Structure3D dataset (see Figure 6), but also generalizes effectively to in-the-wild datasets (Zhou et al., 2018) and scenarios with different image styles such as cartoon style (see Figure 1).
48
+
49
+ # 2 RELATED WORK
50
+
51
+ Layout estimation. Most room layout estimation research focuses on single-perspective image inputs. Stekovic et al. (2020) formulates layout estimation as a constrained discrete optimization problem to identify 3D polygons. Yang et al. (2022) introduces line-plane constraints and connectivity relations between planes for layout estimation, while Sun et al. (2019) formulates the task as predicting 1D layouts. Other studies, such as Zou et al. (2018), propose to utilize monocular 360-degree panoramic images for more information. Several works extend the input setting from single panoramic to multi-view panoramic images, e.g. Wang et al. (2022) and Hu et al. (2022). However, there is limited research addressing layout estimation from multi-view RGB perspective images. Howard-Jenkins et al. (2019) detects and regresses 3D piece-wise planar surfaces from a series of images and clusters them to obtain the final layout, but this method requires posed images. The most related work is Jin et al. (2021), which focuses on a different task: reconstructing indoor scenes with planar surfaces from wide-baseline, unposed images. It is limited to two views and requires an incremental stitching process to incorporate additional views.
52
+
53
+ Holistic scene understanding. Traditional 3D indoor reconstruction methods are widely applicable but often lack explicit semantic information. To address this limitation, recent research has increasingly focused on incorporating holistic scene structure information, enhancing scene understanding by improving reasoning about physical properties, mostly centered on single-perspective images. Several studies have explored the detection of 2D line segments using learning-based detectors (Zhou et al., 2019; Pautrat et al., 2021; Dai et al., 2022). However, these approaches often struggle to differentiate between texture-based lines and structural lines formed by intersecting planes. Some research has focused on planar reconstruction to capture higher-level information (Liu et al., 2018; Yu et al., 2019; Liu et al., 2019). Certain studies (Huang et al., 2018; Nie et al., 2020; Sun et al., 2021) have tackled multiple tasks alongside layout reconstruction, such as depth estimation, object detection, and semantic segmentation. Other works operate on constructed point maps; for instance, Yue et al. (2023) reconstructs floor plans from density maps by predicting sequences of room corners to form polygons. SceneScript (Avetisyan et al., 2024) employs large language models to represent indoor scenes as structured language commands.
54
+
55
+ Multi-view pose estimation and reconstruction. The most widely applied pipeline for pose estimation and reconstruction on a series of images involves SfM (Schönberger & Frahm, 2016) and MVS (Schönberger et al., 2016), which typically includes steps such as feature mapping, finding correspondences, solving triangulations and optimizing camera parameters. Most mainstream methods build upon this paradigm with improvements on various aspects of the pipeline. However, recent works such as DUSt3R (Wang et al., 2024) and MASt3R (Leroy et al., 2024) propose a reconstruction pipeline capable of producing globally-aligned pointmaps from unconstrained images. This is achieved by casting the reconstruction problem as a regression of pointmaps, significantly relaxing input requirements and establishing a simpler end-to-end paradigm for 3D reconstruction.
56
+
57
+ # 3 METHOD
58
+
59
+ In this section, we formulate the layout estimation task, transitioning from a single-view to a multiview scenario. We then derive our multi-view layout estimation pipeline as shown in Figure 2 (Section 3.1). Our pipeline consists of three parts: a 2D plane detector $f_{1}$ , a 3D information prediction and correspondence establishment method Plane-DUSt3R $f_{2}$ (Section 3.2), and a post-processing algorithm $f_{3}$ (Section 3.3).
60
+
61
+ ![](images/db90106653b783d6c09868fe708211264f559cf9a6e57aff33d3cff297c54d7e.jpg)
62
+ Figure 2: Our multi-view room layout estimation pipeline. It consists of three parts: 1) a 2D plane detector $f_{1}$ , 2) a 3D information prediction and correspondence establishment method Plane-DUSt3R $f_{2}$ , and 3) a post-processing algorithm $f_{3}$ .
63
+
64
+ # 3.1 FORMULATION OF THE MULTI-VIEW LAYOUT ESTIMATION TASK
65
+
66
+ We begin by revisiting the single-view layout estimation task and unifying the formulation of existing methods. Next, we extend the formulation from single-view to multiple-view setting, providing a detailed analysis and discussion focusing on the choice of solutions. Before formulating the layout estimation task, we adopt the "geometric primitives + relationships" representation from Zheng et al. (2020) to model the room layout.
67
+
68
+ # Geometric Primitives.
69
+
70
+ - Planes: The scene layout could be represented as a set of planes $\{P_1, P_2, \ldots\}$ in 3D space and their corresponding 2D projections $\{p_1, p_2, \ldots\}$ in images. Each plane is parameterized by its normal $\mathbf{n} \in \mathbb{S}^2$ and offset $d$ . For a 3D point $\mathbf{x} \in \mathbb{R}^3$ lying on the plane, we have $\mathbf{n}^T \mathbf{x} + d = 0$ (see the sketch after this list).
71
+ - Lines & Junction Points: In 3D space, two planes intersect at a 3D line, three planes intersect at a 3D junction point. We denote the set of all 3D lines/junction points in the scene as $\{L_1, L_2 \ldots\} / \{J_1, J_2 \ldots\}$ and their corresponding 2D projections as $\{l_1, l_2, \ldots\} / \{j_1, j_2, \ldots\}$ in images.
72
+
73
+ # Relationships.
74
+
75
+ - Plane/Line relationships: An adjacent matrix $\mathbf{W}_p / \mathbf{W}_l \in \{0,1\}$ is used to model the relationship between planes/lines. Specifically, $\mathbf{W}_p(i,j) = 1$ if and only if $\mathbf{P}_i$ and $\mathbf{P}_j$ intersect along a line; otherwise, $\mathbf{W}_p(i,j) = 0$ . Similarly to plane relationship, $\mathbf{W}_l(i,j) = 1$ if and only if $\mathbf{L}_i$ and $\mathbf{L}_j$ intersect at a certain junction, otherwise, $\mathbf{W}_l(i,j) = 0$ .
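+
+ A minimal sketch (ours, purely illustrative) of this "geometric primitives + relationships" representation: each plane stores a unit normal and an offset, points on the plane satisfy $\mathbf{n}^T \mathbf{x} + d = 0$ , and plane adjacency is encoded by the binary matrix $\mathbf{W}_p$ . Class and variable names are assumptions.
+
+ ```python
+ import numpy as np
+
+ class Plane:
+     """Structural plane with unit normal n and offset d, so that n^T x + d = 0 on the plane."""
+     def __init__(self, normal, offset):
+         self.n = np.asarray(normal, dtype=float) / np.linalg.norm(normal)
+         self.d = float(offset)
+
+     def signed_distance(self, x):
+         return self.n @ np.asarray(x, dtype=float) + self.d
+
+ # Toy layout: a floor (y = 0) and a wall (z = 3) that intersect along a 3D line.
+ planes = [Plane([0, 1, 0], 0.0), Plane([0, 0, 1], -3.0)]
+ W_p = np.array([[0, 1],
+                 [1, 0]])     # W_p(i, j) = 1: planes i and j intersect along a line
+
+ print(planes[0].signed_distance([2.0, 0.0, 1.5]))   # 0.0 -> the point lies on the floor
+ ```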
76
+
77
+ The pipeline of single-view layout estimation methods (Liu et al., 2019; Yang et al., 2022; Liu et al., 2018; Stekovic et al., 2020) can be formulated as:
78
+
79
+ $$
80
+ \mathcal {I} \xrightarrow {f _ {1}} \{2 D, 3 D \} \xrightarrow {f _ {3}} \{P, L, J, W \}, \tag {1}
81
+ $$
82
+
83
+ where $f_{1}$ is a function that predicts 2D and 3D information from the input single view. Generally speaking, the final layout result $\{P, L, J, W\}$ can be directly inferred from the outputs of $f_{1}$ . However, errors arising from $f_{1}$ usually adversely affect the results. Hence, a refinement step that utilizes prior information about room layout is employed to further improve the performance. Therefore, $f_{3}$ typically encompasses post-processing and refinement steps where the post-processing step generates an initial layout estimation, and the refinement step improves the final results.
84
+
85
+ For instance, Yang et al. (2022) chooses the HRnet network (Wang et al., 2020) as the $f_{1}$ backbone to extract the 2D plane $p$ and line $l$ , and to predict the 3D plane normal $n$ and offset $d$ from the input single view. After obtaining the initial 3D layout from the outputs of $f_{1}$ , the method reprojects the 3D line to a 2D line $\hat{l}$ on the image and compares it with the detected line $l$ from $f_{1}$ .
86
+
87
+ The reprojected 2D line $\hat{l}$ is compared with the detected line $l$ from $f_{1}$, and $f_{3}$ minimizes the error $\| \hat{l} - l\|_2^2$ to optimize the 3D plane normal. In other words, it uses the more reliably detected 2D line to improve the estimated 3D plane normal. In contrast, Stekovic et al. (2020) uses a different approach: its $f_{1}$ predicts a 2.5D depth map instead of a 2D line $l$ and uses the more accurate depth results to refine the estimated 3D plane normal. Among the works that follow the general framework of equation (1) (Liu et al., 2019; 2018), Yang et al. (2022) stands out as the best single-view perspective-image layout estimation method that does not rely on the Manhattan assumption. Therefore, we present its formulation in equation (2) and extend it to multi-view scenarios.
88
+
89
+ $$
90
+ \mathcal {I} \xrightarrow {f _ {1}} \{p, l, \boldsymbol {n}, d \} \xrightarrow {f _ {3}} \{\boldsymbol {P}, \boldsymbol {L}, \boldsymbol {J}, \boldsymbol {W} \}, \tag {2}
91
+ $$
92
+
93
+ In room layout estimation from unposed multi-view images, two primary challenges arise: 1) camera pose estimation, and 2) 3D information estimation from multi-view inputs. Camera pose estimation is particularly problematic given the scarcity of annotated multi-view layout datasets. Thanks to recent advances in 3D vision with pretrained models, this challenge can be effectively bypassed: DUSt3R (Wang et al., 2024) has demonstrated the ability to reconstruct scenes from unposed images without requiring camera intrinsics or extrinsics, and even without overlap between views. Moreover, the 3D pointmap generated by DUSt3R provides significantly improved 3D information, such as plane normals and offsets, compared to single-view methods (Yang et al., 2022) (see Table 1 in the experiments section). Therefore, DUSt3R represents a critical building block for extending single-view layout estimation to multi-view scenarios. Before formulating the multi-view solution, we first present the key 3D representations of DUSt3R: the pointmap $X$ and the camera pose $T$. The camera pose $T$ is obtained through global alignment, as described in DUSt3R (Wang et al., 2024).
94
+
95
+ - Pointmap $X$ : Given a set of RGB images $\{\mathcal{I}_1, \ldots, \mathcal{I}_n\} \in \mathbb{R}^{H \times W \times 3}$ , captured from distinct viewpoints of the same indoor scene, we associate each image $\mathcal{I}_i$ with a canonical pointmap $X_i \in \mathbb{R}^{H \times W \times 3}$ . The pointmap represents a one-to-one mapping from each pixel $(u, v)$ in the image to a corresponding 3D point in the world coordinate frame: $(u, v) \in \mathbb{R}^2 \mapsto X(u, v) \in \mathbb{R}^3$ .
96
+ - Camera Pose $T$ : Each image $\mathcal{I}_i$ is associated with a camera-to-world pose $T_i \in SE(3)$ .
97
+
98
+ Now, the sparse-view layout estimation problem can be formulated as shown in equation (3):
99
+
100
+ $$
101
+ \left\{\mathcal {I} _ {1}, \mathcal {I} _ {2}, \dots \right\} \xrightarrow {f _ {1} , f _ {2}} \left\{p, l, \boldsymbol {X}, \boldsymbol {T} \right\} \xrightarrow {f _ {3}} \left\{\boldsymbol {P}, \boldsymbol {L}, \boldsymbol {J}, \boldsymbol {W} \right\}. \tag {3}
102
+ $$
103
+
104
+ In this work, we adopt the HRnet backbone from Yang et al. (2022) as $f_{1}$ . In the original DUSt3R (Wang et al., 2024) formulation, the ground truth pointmap $X^{obj}$ represents the 3D coordinates of the entire indoor scene. In contrast, we are interested in plane pointmap $X^p$ that represents the 3D coordinates of structural plane surfaces, including walls, floors, and ceilings. This formulation intentionally disregards occlusions caused by non-structural elements, such as furniture within the room. Our objective is to predict the scene layout pointmap without occlusions from objects, even when the input images contain occluding elements. For simplicity, any subsequent reference to $X$ in this paper refers to the newly defined plane pointmap $X^p$ . We introduce Plane-DUSt3R as $f_{2}$ and directly infer the final layout via $f_{3}$ without the need for any refinement.
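+
+ To make this composition concrete, the following is a minimal sketch of how the three stages of equation (3) fit together. The callables `f1`, `f2`, `g1`, and `g2` are placeholders for the components described in this paper, not the actual implementation.
+
+ ```python
+ def estimate_layout(images, f1, f2, g1, g2):
+     """Sketch of the pipeline in equation (3): images -> {P, L, J, W}."""
+     detections = [f1(img) for img in images]        # f1: per-view 2D planes p and lines l
+     pointmaps, poses = f2(images)                    # f2: Plane-DUSt3R plane pointmaps X and poses T
+     partial = [g1(X, det) for X, det in zip(pointmaps, detections)]  # per-view partial layouts
+     return g2(partial, poses)                        # merge into the final layout {P, L, J, W}
+ ```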
105
+
106
+ # 3.2 $f_{2}$ : PLANE-BASED DUST3R
107
+
108
+ The original DUSt3R outputs pointmaps that capture all 3D information in a scene, including furniture, wall decorations, and other objects. However, such excessive information introduces interference when extracting geometric primitives for layout prediction, such as planes and lines. To obtain a structural plane pointmap $\mathbf{X}$ , we modify the data labels from the original depth map (Figure 4 (a)) to the structural plane depth map (Figure 4 (b)), and then retrain the DUSt3R model. This updated objective guides DUSt3R to predict the pointmap of the planes while ignoring other objects. The original DUSt3R does not guarantee output at a metric scale, so we also trained a modified version of Plane-DUSt3R that produces metric-scale results.
109
+
110
+ Given a set of image pairs $\mathbb{P} = \{(\mathcal{I}_i,\mathcal{I}_j)\mid i\neq j,1\leq i,j\leq n,\mathcal{I}\in \mathbb{R}^{H\times W\times 3}\}$, the model processes each image pair with two parallel branches, as shown in Figure 3; details of the architecture can be found in Appendix A.
111
+
112
+ ![](images/ee58e09c6650f7331e5441b03ed6015b3a76465944023ff290652bb665b11aa4.jpg)
113
+ Figure 3: Plane-DUSt3R architecture remains identical to DUSt3R. The transformer decoder and regression head are further fine-tuned on the occlusion-free depth map (see Figure 4).
114
+
115
+ The regression loss is defined as the scale-invariant Euclidean distance between the normalized predicted and ground-truth pointmaps: $l_{\text{repr}}(v,i) = \|\frac{1}{z}\pmb{X}_i^v - \frac{1}{\bar{z}}\bar{\pmb{X}}_i^v\|_2^2$ , where view $v \in \{1,2\}$ and $i$ is the pixel index. The scaling factors $z$ and $\bar{z}$ represent the average distance of all corresponding valid points to the origin. In addition, by incorporating a confidence-weighted loss, the model implicitly learns to identify regions that are more challenging to predict. As in DUSt3R (Wang et al., 2024), the confidence loss is defined as: $\mathcal{L}_{\mathrm{conf}} = \sum_{v \in \{1,2\}} \sum_{i \in \mathcal{D}^v} C_i^{v,1} \ell_{\mathrm{repr}}(v,i) - \alpha \log C_i^{v,1}$ , where $\mathcal{D}^v \subseteq \{1,\dots,H\} \times \{1,\dots,W\}$ are the sets of valid pixels on which the ground truth is defined.
116
+
117
+ Structural plane depth map. The Structure3D dataset provides ground truth plane normal and offset, allowing us to re-render the plane depth map at the same camera pose (as shown in Figure 4). We then transform the structural plane depth map $D^{p}$ to pointmap $X^{v}$ in the camera coordinate frame $v$ . This transformation is given by $\mathbf{X}_{i,j}^{v} = \mathbf{K}^{-1}[i\mathbf{D}_{i,j}^{p}, j\mathbf{D}_{i,j}^{p}, \mathbf{D}_{i,j}^{p}]^{\top}$ , where $\mathbf{K} \in \mathbb{R}^{3 \times 3}$ is the camera intrinsic matrix. Further details of this transformation can be found in Wang et al. (2024).
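+
+ As a concrete illustration of this unprojection, the snippet below converts a structural plane depth map into a pointmap given a pinhole intrinsic matrix. It is a minimal numpy sketch of the formula above (pixel index conventions may need swapping depending on how $(i, j)$ is defined), not the actual data-preparation code.
+
+ ```python
+ import numpy as np
+
+ def depth_to_pointmap(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
+     """Unproject an HxW depth map into an HxWx3 pointmap via X = K^{-1} [u*D, v*D, D]^T."""
+     H, W = depth.shape
+     rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
+     pix = np.stack([cols * depth, rows * depth, depth], axis=-1)  # (H, W, 3) pixel coords scaled by depth
+     return pix @ np.linalg.inv(K).T                               # apply K^{-1} to every pixel
+ ```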
118
+
119
+ Metric-scale. In the multi-view setting, outputs at a consistent metric scale are desirable, which differs from the scale-invariant DUSt3R. To accommodate this, we modify the regression loss to bypass normalization of the predicted pointmaps when the ground-truth pointmaps are metric. Specifically, we set $z \coloneqq \bar{z}$ whenever the ground truth is metric, resulting in the following loss function: $l_{\text{repr}}(v,i) = \|X_i^v - \bar{X}_i^v\|_2^2 / \bar{z}$ .
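+
+ The sketch below shows how the scale-invariant loss, the metric-scale variant, and the confidence weighting fit together for one view (PyTorch-style). It is an illustration of the formulas above, with an illustrative value of $\alpha$, not the released training code.
+
+ ```python
+ import torch
+
+ def pointmap_loss(X_pred, X_gt, conf, valid, alpha=0.2, metric=False):
+     """X_pred, X_gt: (H, W, 3) pointmaps; conf: (H, W) confidences; valid: (H, W) bool mask."""
+     z_bar = X_gt[valid].norm(dim=-1).mean()                       # average distance of valid GT points to origin
+     z = z_bar if metric else X_pred[valid].norm(dim=-1).mean()    # metric variant sets z := z_bar
+     l_repr = ((X_pred / z - X_gt / z_bar) ** 2).sum(dim=-1)[valid]
+     # L_conf = sum_i C_i * l_repr(i) - alpha * log C_i over valid pixels
+     return (conf[valid] * l_repr - alpha * torch.log(conf[valid])).sum()
+ ```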
120
+
121
+ ![](images/e134dc61ed34a307d51df7645db639736ba4023cc80c9e925e8fcd33700786db.jpg)
122
+ (a) The original DUSt3R depth map.
123
+
124
+ ![](images/fcd58ced7cfe55c1ae03c9794e30cff3aa9e2ca28e032194b3ac37f6257267fe.jpg)
125
+ (b) The Plane-DUSt3R depth map.
126
+ Figure 4: (a) The original DUSt3R depth map and (b) the occlusion-removed depth map.
127
+
128
+ # 3.3 $f_{3}$ : POST-PROCESSING
129
+
130
+ In this section, we introduce how to combine the multi-view plane pointmaps $\mathbf{X}$ and the 2D detection results $p, l$ to derive the final layout $\{\mathbf{P}, \mathbf{L}, \mathbf{J}, \mathbf{W}\}$. For each single view $\mathcal{I}_i$, we infer a partial layout result $\{\tilde{P}_i, \tilde{L}_i, \tilde{J}_i, \tilde{W}_i\} = g_1(\mathbf{X}_i, p^i, l^i)$ from the single-view pointmap $\mathbf{X}_i$ and the 2D detection results $p^i, l^i$ through a post-processing algorithm $g_1$ in camera coordinates. Then, a correspondence-establishment and merging algorithm $g_2$ combines all partial results to obtain the final layout $\{\mathbf{P}, \mathbf{L}, \mathbf{J}, \mathbf{W}\} = g_2(\{\tilde{P}_1, \tilde{L}_1, \tilde{J}_1, \tilde{W}_1\}, \ldots)$.
131
+
132
+ Single-view room layout estimation $g_{1}$. For an image $\mathcal{I}_i$, $g_{1}$ mainly addresses two tasks: 1) lifting 2D planes into 3D camera coordinate space using the 3D normals derived from the pointmap $X_{i}$, and 2) inferring the wall adjacency relationships. We follow the post-processing procedure in Yang et al. (2022) but with two improvements. First, the plane normal $n$ and offset $d$ are inferred from $X_{i}$ instead of being directly regressed by the network $f_{1}$; the points of $X_{i}$ that belong to the same plane are used to calculate $n$ and $d$. Second, the more accurate 3D information in the pointmap $X_{i}$ allows us to better infer pseudo wall adjacency, as described below.
133
+
134
+ ![](images/bb65b6d6dacfea3b7ceeacbe36d54839c15db7d46e13e6ade2cf37ff88666208.jpg)
135
+ (a) Projected Lines
136
+
137
+ ![](images/52414a0e4bb82f33fdaab246435ca638dad87efad0243353e1e874c6ebbdc263.jpg)
138
+ (b) Rotated Lines
139
+ Figure 5: (a) Planes are projected onto the x-z plane as 2D line segments. (b) The scene is rotated so that line segments are approximately horizontal or vertical. (c) Line segments are classified and aligned to be either horizontal or vertical. (d) Merged planes are shown, with segments belonging to the same plane indicated by the same color and index.
140
+
141
+ ![](images/853e398edcb21b9b5e5d64e3bffcebc82fefc7c3519143968b5d128a17e2a0ef.jpg)
142
+ (c) Aligned Lines
143
+
144
+ ![](images/ca3b8fa1d0e3a7d5029d2bbf3eb87647969502dfa070ab896649358d56060755.jpg)
145
+ (d) Correspondence
146
+
147
+ Pseudo wall adjacency is inferred through the depth consistency between the plane intersection line $\pmb{L}$ (inferred from the 2D planes $p$ ) and the predicted line region (extracted from the corresponding region of $\pmb{X}_i$ ). In our experiments, the depth consistency tolerance $\epsilon_1$ is set to 0.005.
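+
+ As an illustration of the plane-fitting step, the plane parameters can be recovered from the pointmap by a least-squares fit over the pixels assigned to one detected plane mask. The following is a small numpy sketch (not the exact implementation).
+
+ ```python
+ import numpy as np
+
+ def fit_plane(points: np.ndarray):
+     """Fit n^T x + d = 0 to an (N, 3) array of 3D points taken from the pointmap X_i."""
+     centroid = points.mean(axis=0)
+     _, _, vh = np.linalg.svd(points - centroid)
+     n = vh[-1]                    # direction of smallest variance = plane normal
+     d = -float(n @ centroid)      # offset so that n^T x + d = 0 holds at the centroid
+     return n, d
+ ```
+
+ In $g_1$, `points` would be the entries of $X_i$ that fall inside one detected 2D plane mask $p$.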
148
+
149
+ Multi-view room layout estimation $g_{2}$. Based on the results of $g_{1}$, $g_{2}$ uses the global alignment of DUSt3R (refer to Appendix A) to obtain the camera pose $T_{i}$ for each image $\mathcal{I}_{i}$. Then, we transform all partial layout results $(\{\tilde{P}_1, \tilde{L}_1, \tilde{J}_1, \tilde{W}_1\}, \ldots)$ into the same coordinate space. In this coordinate space, we establish correspondences between planes, then merge the matched planes and assign a unique ID to each.
150
+
151
+ Since we allow at most one floor and one ceiling detection per image, we simply average the parameters from all images to obtain the final floor and ceiling parameters. As for walls, we assume all walls are perpendicular to both the floor and ceiling. To simplify the merging process, we project all walls onto the x-z plane defined by the floor and ceiling. This projection reduces the problem to a 2D space, making it easier to identify and merge corresponding walls. Figure 5 illustrates the entire process of merging walls. Each wall in an image is denoted as one line segment, as shown in Figure 5a. We then rotate the scene so that all line segments are approximately horizontal or vertical, as depicted in Figure 5b. In Figure 5c, each line segment is classified and further rotated to be either horizontal or vertical, based on the assumption that all adjacent walls are perpendicular to each other.
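+
+ A minimal sketch of this projection-and-alignment step is given below: each wall is reduced to a 2D segment on the x-z plane and snapped to the nearest of the two dominant directions. The median-angle rotation estimate here is a simplification of the actual procedure.
+
+ ```python
+ import numpy as np
+
+ def snap_segments(segments):
+     """segments: list of (2, 2) arrays, each row an (x, z) endpoint of a projected wall segment."""
+     dirs = [s[1] - s[0] for s in segments]
+     theta = np.median([np.arctan2(d[1], d[0]) % (np.pi / 2) for d in dirs])  # dominant direction
+     R = np.array([[np.cos(-theta), -np.sin(-theta)],
+                   [np.sin(-theta), np.cos(-theta)]])
+     aligned = []
+     for s in segments:
+         s = s.astype(float) @ R.T                 # rotate the scene so walls become axis-aligned
+         d = s[1] - s[0]
+         if abs(d[0]) >= abs(d[1]):                # closer to horizontal: flatten z
+             s[:, 1] = s[:, 1].mean()
+             label = "horizontal"
+         else:                                     # closer to vertical: flatten x
+             s[:, 0] = s[:, 0].mean()
+             label = "vertical"
+         aligned.append((s, label))
+     return aligned
+ ```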
152
+
153
+ Figure 5d shows the final result after the merging process. The merging process can be regarded as a classical minimum cut problem. In Figure 5d, each line segment is regarded as a node in a graph, and two nodes are connected if and only if they satisfy three constraints: 1) they belong to the same category (vertical or horizontal); 2) they do not appear in the same image; 3) they do not cross a line segment of the other category. Finally, the weight of each connection is set to the Euclidean distance between the centers of the two line segments. Based on this graph, the merging result is the optimal solution of the minimum cut on this graph. The details of the merging process can be found in Algorithm 1 in Appendix B.
154
+
155
+ # 4 EXPERIMENTS
156
+
157
+ # 4.1 SETTINGS.
158
+
159
+ Dataset. Structured3D (Zheng et al., 2020) is a synthetic dataset that provides a large collection of photo-realistic images with detailed 3D structural annotations. Similar to Yang et al. (2022), the dataset is divided into training, validation, and test sets at the scene level, comprising 3000, 250, and 250 scenes, respectively. Each scene consists of multiple rooms, with each room containing 1 to 5 images captured from different viewpoints. To construct image pairs that share similar visual content, we retain only rooms with at least two images. Within each room, images are paired to form image sets. Ultimately, we obtained 115,836 image pairs for the training set and 11,030 image pairs for the test set. For validation, we assess all rooms from the validation set. For rooms that only have one image, we duplicate that image to form image pairs for pointmap retrieval. In the subsequent inference process, we retain only one pointmap per room.
160
+
161
+ Training details. During training, we initialize the model with the original DUSt3R checkpoint. We freeze the encoder parameters and fine-tune only the decoder and DPT heads. Our data augmentation
162
+
163
+ ![](images/233d41a8238cda0a3dfd5ec143bee27ff777fca40f8d9d1f19ec9384587bf669.jpg)
164
+ Figure 6: Qualitative comparison on the Structure3D test set. From left to right: (1-3) input views of the scene, (4) result by baseline method (Noncuboid+MASt3R) shown in wireframe only, and (5) our method's result visualized with predicted point clouds. Please refer to the appendix for more complete results.
165
+
166
+ strategy follows the same approach as DUSt3R, using an input resolution of $512 \times 512$. We employ the AdamW optimizer (Loshchilov & Hutter, 2017) with a cosine learning rate decay schedule, starting from a base learning rate of 1e-4 and decaying to a minimum of 1e-6. The model is trained for 20 epochs, including 2 warm-up epochs, with a batch size of 16. We train two versions of Plane-DUSt3R: one with the metric-scale loss, named Plane-DUSt3R (metric), and one without it, named simply Plane-DUSt3R.
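+
+ As a reference, the optimization setup above roughly corresponds to the following PyTorch configuration. This is a hedged sketch: `model.encoder` is a placeholder module name, and the warm-up-plus-cosine schedule is approximated with a simple lambda rather than reproducing the exact training script.
+
+ ```python
+ import math
+ import torch
+
+ def build_optimizer(model, base_lr=1e-4, min_lr=1e-6, epochs=20, warmup_epochs=2):
+     for p in model.encoder.parameters():          # freeze the ViT encoder (placeholder attribute)
+         p.requires_grad = False
+     params = [p for p in model.parameters() if p.requires_grad]   # decoder + DPT heads
+     optimizer = torch.optim.AdamW(params, lr=base_lr)
+
+     def lr_factor(epoch):                          # multiplier applied to base_lr per epoch
+         if epoch < warmup_epochs:
+             return (epoch + 1) / warmup_epochs
+         t = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
+         return (min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))) / base_lr
+
+     scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)
+     return optimizer, scheduler
+ ```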
167
+
168
+ Evaluation. Following the task formulation in equation (3), our evaluation protocol consists of three parts to assess $f_{1}$ , $f_{2}$ , and the overall performance, respectively.
169
+
170
+ - For the 2D information extraction module $f_{1}$ , we use the same metrics as Yang et al. (2022) for comparison: Intersection over Union (IoU), Pixel Error (PE), Edge Error (EE), and Root Mean Square Error (RMSE).
171
+ - For the multi-view information extraction module $f_{2}$ , we report the Relative Rotation Accuracy (RRA) and Relative Translation Accuracy (RTA) for each image pair to evaluate the relative pose error. We use a threshold of $\tau = 15$ to report RTA@15 and RRA@15 (comprehensive results under different thresholds are given in Table 4 of Appendix C.1). Additionally, we calculate the mean Average Accuracy (mAA@30), defined as the area under the accuracy curve of the angular differences at min(RRA@30, RTA@30).
172
+ - Finally, for evaluating the overall layout estimation, we employ 3D precision and 3D recall of planes as metrics. A predicted plane is considered matched with a ground truth plane if and only if the angular difference between them is less than $10^{\circ}$ and the offset difference is less than $0.15\mathrm{m}$ . Each ground truth plane can be matched only once.
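+
+ To make the plane-matching criterion above concrete, the following numpy sketch computes 3D precision and recall with a greedy one-to-one matching; the exact evaluation script may differ in detail.
+
+ ```python
+ import numpy as np
+
+ def plane_precision_recall(pred, gt, angle_thresh_deg=10.0, offset_thresh=0.15):
+     """pred, gt: lists of (n, d) pairs with n a unit normal and d the plane offset in meters."""
+     matched, tp = set(), 0
+     for n_p, d_p in pred:
+         for j, (n_g, d_g) in enumerate(gt):
+             if j in matched:
+                 continue                                       # each GT plane can be matched only once
+             cos = np.clip(abs(np.dot(n_p, n_g)), -1.0, 1.0)    # abs() makes the test sign-agnostic
+             if np.degrees(np.arccos(cos)) < angle_thresh_deg and abs(d_p - d_g) < offset_thresh:
+                 matched.add(j)
+                 tp += 1
+                 break
+     return tp / max(1, len(pred)), tp / max(1, len(gt))        # precision, recall
+ ```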
173
+
174
+ Baselines. As this work is the first attempt at 3D room layout estimation from multi-view perspective images, there are no existing baseline methods for direct comparison. Therefore, we design two reasonable baseline methods. We also compare our $f_{1}$ and $f_{2}$ with other methods of the same type.
175
+
176
+ - Since we use Noncuboid (Yang et al., 2022) as our $f_{1}$ , we not only compare it against the baselines from their paper (Liu et al., 2019; Stekovic et al., 2020), but also retrain it with better hyperparameters obtained through grid search.
177
+ - For $f_{2}$ (Plane-DUSt3R), we compare it against the recent data-driven image matching methods DUSt3R (Wang et al., 2024) and MASt3R (Leroy et al., 2024).
178
+
179
+ - Finally, for the overall multi-view layout baseline, we design two methods: 1) Noncuboid with ground truth camera poses and 2) Noncuboid with MASt3R. In this context, we introduce the fusion of MASt3R (Leroy et al., 2024) and NonCuboid (Yang et al., 2022) as our baseline method. MASt3R further extends DUSt3R, enabling it to estimate camera poses at a metric scale from sparse image inputs. We employ the original NonCuboid method to obtain single-view layout reconstruction. Next, we utilize the predicted camera poses to unify all planes from their respective camera poses into a common coordinate system. For instance, we designate the coordinate system of the first image as the world coordinate system. We then perform the same operation as described in Section 3.3 to achieve the final multi-view reconstruction.
180
+
181
+ ![](images/4fb15c25a467db0b103140d3f43484b927c4d819f53d8c70c454c13a29f08a8a.jpg)
182
+
183
+ ![](images/2abc6a265342c8aad8fd1566ed62fcea39cb7519b9bf5722e2e45b1df6172349.jpg)
184
+
185
+ ![](images/fd8f876befbea271ee5a914a454eb6765aefa5e2eadd44249cc7e235a3bee4af.jpg)
186
+
187
+ ![](images/37dc7dbbcea72b5ec3f56ac07398cf8bc3dcfdd39db1733a65db39c627d0a48a.jpg)
188
+
189
+ ![](images/c96a441fe3d1f6d163865631f585471b86969424261e0791b315691492cad3fe.jpg)
190
+
191
+ ![](images/16c7ee40b0b7b31827d6489f3f8c865424eecfe5f47d4dcf67fed00caa059d91.jpg)
192
+ Figure 7: Bird's-eye view of multi-view 3D planes aligned to the same coordinate frame. The first row shows 5 cases of our pipeline's results after the post-processing step. The second row shows the results of Noncuboid+MASt3R. Line segments of the same color belong to the same plane.
193
+
194
+ ![](images/a2e082a307b17c3ff3690b551cb3080c72649380f827e97054ba4a6b261fb669.jpg)
195
+
196
+ ![](images/7c2044e54b19c90be3698f2b33ced66b69e8ea7fe403769a59e81867d18cbd50.jpg)
197
+
198
+ ![](images/c0ac6fa1f347f552ca6183ca4c07a6e8f20f84b8653f423c4483238f178bb17f.jpg)
199
+
200
+ ![](images/3e5cd0325e578ab4d34c078d868e0eb58e11e78d38f97a459bbc187fceb90f79.jpg)
201
+
202
+ Table 1: Quantitative results on Structure3D dataset.
203
+
204
+ <table><tr><td>Method</td><td>re-IoU(%)↑</td><td>re-PE(%) ↓</td><td>re-EE↓</td><td>re-RMSE↓</td><td>3D-precision(%)↑</td><td>3D-recall(%)↑</td></tr><tr><td>Noncuboid + MASt3R</td><td>74.51</td><td>8.57</td><td>12.72</td><td>0.4831</td><td>37.00</td><td>43.39</td></tr><tr><td>Noncuboid + GT pose</td><td>75.93</td><td>7.97</td><td>11.37</td><td>0.4457</td><td>46.96</td><td>50.59</td></tr><tr><td>Ours (metric)</td><td>75.34</td><td>8.60</td><td>10.83</td><td>0.4388</td><td>48.98</td><td>45.35</td></tr><tr><td>Ours (aligned)</td><td>76.84</td><td>7.82</td><td>9.53</td><td>0.4099</td><td>52.63</td><td>48.37</td></tr></table>
205
+
206
+ The Noncuboid with ground truth camera poses baseline is introduced as an ablation study to eliminate the effects of inaccurate pose estimation. Its experimental setup is the same as that of the Noncuboid with MASt3R pipeline, except that ground truth camera poses are used instead of poses estimated by MASt3R.
207
+
208
+ # 4.2 MULTI-VIEW ROOM LAYOUT ESTIMATION RESULTS
209
+
210
+ In this section, we compare our multi-view layout estimation pipeline with the two baseline methods, both qualitatively and quantitatively. Additionally, we conduct experiments to verify the effectiveness of our pipeline components: the 2D detector $f_{1}$ and Plane-DUSt3R $f_{2}$.
211
+
212
+ Layout results comparison. Table 1 and Figure 6 present quantitative and qualitative comparisons of our pipeline with the two baseline methods. Ours (metric) and Ours (aligned) in Table 1 refer to our pipeline using Plane-DUSt3R (metric) and Plane-DUSt3R, respectively. The first four metrics (re-IoU, re-PE, re-EE, and re-RMSE) are calculated in the same way as their 2D counterparts (IoU, PE, EE, and RMSE), except that the predicted 2D results are reprojected from the estimated multi-view 3D layout. Compared with the baselines, Plane-DUSt3R yields superior 3D plane normal estimates relative to Noncuboid's single-view estimation, even when the latter uses ground truth camera poses (Noncuboid + GT pose). Figure 7 further demonstrates that Plane-DUSt3R predicts accurate and robust 3D information from sparse-view input.
213
+
214
+ Table 2: Comparison with data-driven image matching approaches.
215
+
216
+ <table><tr><td rowspan="2" colspan="2">Methods</td><td>RealEstate10K</td><td colspan="3">Structured3D</td><td colspan="3">CAD-estate</td></tr><tr><td>mAA@30</td><td>RRA@15</td><td>RTA@15</td><td>mAA@30</td><td>RRA@15</td><td>RTA@15</td><td>mAA@30</td></tr><tr><td rowspan="2">(a)</td><td>DUSt3R (Wang et al., 2024)</td><td>61.2</td><td>89.44</td><td>85.00</td><td>76.13</td><td>99.88</td><td>84.82</td><td>76.38</td></tr><tr><td>MASt3R (Leroy et al., 2024)</td><td>76.4</td><td>92.94</td><td>89.77</td><td>85.34</td><td>99.94</td><td>99.00</td><td>95.29</td></tr><tr><td rowspan="2">(b)</td><td>Plane-DUSt3R (metric)</td><td>-</td><td>98.21</td><td>96.66</td><td>90.67</td><td>94.61</td><td>70.52</td><td>61.48</td></tr><tr><td>Plane-DUSt3R (aligned)</td><td>-</td><td>97.95</td><td>96.59</td><td>91.80</td><td>94.96</td><td>73.74</td><td>64.21</td></tr></table>
217
+
218
+ 3D information prediction and correspondence establishment method Plane-DUSt3R $f_{2}$. Table 2 compares our Plane-DUSt3R (part (b) in Table 2) with recent popular data-driven image matching approaches (part (a) in Table 2) on the RealEstate10K (Zhou et al., 2018), Structured3D (Zheng et al., 2020), and CAD-Estate (Rozumnyi et al., 2023) datasets. Note that in part (b), results on the RealEstate10K dataset are not provided since our model is specifically designed to predict room structural elements, while RealEstate10K contains outdoor scenes that may lead to prediction failures.
219
+
220
+ Table 3: 2D detectors comparison on Structure3D dataset.
221
+
222
+ <table><tr><td>Method</td><td>IoU(%)↑</td><td>PE(%) ↓</td><td>EE↓</td><td>RMSE↓</td></tr><tr><td>Planar R-CNN (Liu et al., 2019)</td><td>79.64</td><td>7.04</td><td>6.58</td><td>0.4013</td></tr><tr><td>Rac (Stekovic et al., 2020)</td><td>76.29</td><td>8.07</td><td>7.19</td><td>0.3865</td></tr><tr><td>Noncuboid (Yang et al., 2022)</td><td>79.94</td><td>6.40</td><td>6.80</td><td>0.2827</td></tr><tr><td>Noncuboid (re-trained)</td><td>80.18</td><td>6.13</td><td>6.41</td><td>0.2631</td></tr></table>
223
+
224
+ ![](images/58e27c78be09726dd8ad3849b0e834a618858238f4315812309382a174109227.jpg)
225
+ Figure 8: Qualitative comparison on in-the-wild data (Zhou et al., 2018). From left to right: (1-3) three input views of the scene, (4) layout reconstruction by baseline method (Noncuboid+MASt3R) shown in wireframe only, and (5) our method's result visualized with predicted point clouds.
226
+
227
+ Instead, we utilize the CAD-Estate dataset, which is derived from RealEstate10K with additional room layout annotations, as a more suitable benchmark for comparison. The results of parts (a) and (b) on the three datasets show the advances of MASt3R, not only on traditional multi-view datasets (RealEstate10K, CAD-Estate), but also on the sparse-view dataset (Structured3D). Plane-DUSt3R achieves better performance than the previous state of the art, MASt3R. One could argue that Plane-DUSt3R is naturally better since it is fine-tuned on Structured3D; that is precisely the message we want to convey. DUSt3R/MASt3R are the state of the art in both multi-view and sparse-view camera pose estimation, and after our modifications (Section 3.2) and fine-tuning, Plane-DUSt3R improves over MASt3R by 5.33 points in mAA@30 on the sparse-view layout dataset.
228
+
229
+ Comparison of 2D detectors $(f_{1})$ . We retrain the Noncuboid method with a more thorough hyperparameter grid search, resulting in an improved version. Table 3 compares its results with other baseline methods from Yang et al. (2022).
230
+
231
+ Comparison of various input views. We examine the impact of different numbers of input views in Appendix C.2. The results in Table 5 show a general trend of improvement as the number of views increases.
232
+
233
+ # 4.3 GENERALIZABILITY TO UNKNOWN AND OUT-OF-DOMAIN DATA
234
+
235
+ Figures 1 and 12 also demonstrate the generalizability of our pipeline. It not only performs well on the Structure3D test set (Figure 6), but also generalizes to new datasets such as RealEstate10K (Figure 8 shows examples from this dataset). Furthermore, our pipeline remains effective on in-the-wild data, as shown in Figures 11 and 12 in the appendix. We also evaluate our pipeline on the CAD-Estate dataset (see Appendix E for details).
236
+
237
+ # 5 CONCLUSION
238
+
239
+ This paper introduces the first pipeline for multi-view layout estimation, even in sparse-view settings. The proposed pipeline encompasses three components: a 2D plane detector, a 3D information prediction and correspondence establishment method, and a post-processing algorithm. As the first comprehensive approach to the multi-view layout estimation task, this paper provides a detailed analysis and formulates the problem under both single-view and multi-view settings. Additionally, we design several baseline methods for comparison to validate the effectiveness of our pipeline. Our approach consistently outperforms the baselines on both 2D projection and 3D metrics. Furthermore, our pipeline not only performs well on the synthetic Structure3D dataset, but also generalizes effectively to in-the-wild data and to different image styles, such as cartoon-style images.
240
+
241
+ # ACKNOWLEDGMENTS
242
+
243
+ This work is partially supported by the Hong Kong Center for Construction Robotics (Hong Kong ITC-sponsored InnoHK center), the National Natural Science Foundation of China (Grant No. 62306261), CUHK Direct Grants (Grant No. 4055190), and The Shun Hing Institute of Advanced Engineering (SHIAE) No. 8115074.
244
+
245
+ # REFERENCES
246
+
247
+ Armen Avetisyan, Christopher Xie, Henry Howard-Jenkins, Tsun-Yi Yang, Samir Aroudj, Suvam Patra, Fuyang Zhang, Duncan Frost, Luke Holland, Campbell Orme, et al. Scenescript: Reconstructing scenes with an autoregressive structured language model. arXiv preprint arXiv:2403.13064, 2024.
248
+ Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024. URL https://openai.com/research/video-generation-models-as-world-simulators.
249
+ Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
250
+ Xili Dai, Haigang Gong, Shuai Wu, Xiaojun Yuan, and Yi Ma. Fully convolutional line parsing. Neurocomputing, 506:1-11, 2022.
251
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
252
+ Varsha Hedau, Derek Hoiem, and David Forsyth. Recovering the spatial layout of cluttered rooms. In 2009 IEEE 12th international conference on computer vision, pp. 1849-1856. IEEE, 2009.
253
+ Henry Howard-Jenkins, Shuda Li, and Victor Prisacariu. Thinking outside the box: Generation of unconstrained 3d room layouts. In Computer Vision-ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2-6, 2018, Revised Selected Papers, Part I 14, pp. 432-448. Springer, 2019.
254
+ Zhihua Hu, Bo Duan, Yanfeng Zhang, Mingwei Sun, and Jingwei Huang. Mvlayoutnet: 3d layout reconstruction with multi-view panoramas. In Proceedings of the 30th ACM International Conference on Multimedia, pp. 1289-1298, 2022.
255
+ Siyuan Huang, Siyuan Qi, Yixin Zhu, Yinxue Xiao, Yuanlu Xu, and Song-Chun Zhu. Holistic 3d scene parsing and reconstruction from a single rgb image. In Proceedings of the European conference on computer vision (ECCV), pp. 187-203, 2018.
256
+ Linyi Jin, Shengyi Qian, Andrew Owens, and David F Fouhey. Planar surface reconstruction from sparse views. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12991-13000, 2021.
257
+ Vincent Leroy, Yohann Cabon, and Jérôme Revaud. Grounding image matching in 3d with mast3r. arXiv preprint arXiv:2406.09756, 2024.
258
+ Chen Liu, Jimei Yang, Duygu Ceylan, Ersin Yumer, and Yasutaka Furukawa. Planenet: Piecewise planar reconstruction from a single rgb image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2579-2588, 2018.
259
+ Chen Liu, Kihwan Kim, Jinwei Gu, Yasutaka Furukawa, and Jan Kautz. Planercnn: 3d plane detection and reconstruction from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4450-4459, 2019.
260
+ Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
261
+
262
+ Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
263
+ Yinyu Nie, Xiaoguang Han, Shihui Guo, Yujuan Zheng, Jian Chang, and Jian Jun Zhang. Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 55-64, 2020.
264
+ Rémi Pautrat, Juan-Ting Lin, Viktor Larsson, Martin R Oswald, and Marc Pollefeys. Sold2: Self-supervised occlusion-aware line description and detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11368-11378, 2021.
265
+ René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 12179-12188, 2021.
266
+ Denys Rozumnyi, Stefan Popov, Kevis-Kokitsi Maninis, Matthias Nießner, and Vittorio Ferrari. Estimating generic 3d room structures from 2d annotations, 2023. URL https://arxiv.org/abs/2306.09077.
267
+ Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
268
+ Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In European Conference on Computer Vision (ECCV), 2016.
269
+ Sinisa Stekovic, Shreyas Hampali, Mahdi Rad, Sayan Deb Sarkar, Friedrich Fraundorfer, and Vincent Lepetit. General 3d room layout from a single view by render-and-compare. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVI 16, pp. 187-203. Springer, 2020.
270
+ Cheng Sun, Chi-Wei Hsiao, Min Sun, and Hwann-Tzong Chen. Horizonnet: Learning room layout with 1d representation and pano stretch data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1047–1056, 2019.
271
+ Cheng Sun, Min Sun, and Hwann-Tzong Chen. Hohonet: 360 indoor holistic understanding with latent horizontal features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2573-2582, 2021.
272
+ Haiyan Wang, Will Hutchcroft, Yuguang Li, Zhiqiang Wan, Ivaylo Boyadzhiev, Yingli Tian, and Sing Bing Kang. Psmnet: Position-aware stereo merging network for room layout estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8616-8625, 2022.
273
+ Jingdong Wang, Ke Sun, Tianheng Cheng, Borui Jiang, Chaorui Deng, Yang Zhao, Dong Liu, Yadong Mu, Mingkui Tan, Xinggang Wang, et al. Deep high-resolution representation learning for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 43 (10):3349-3364, 2020.
274
+ Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20697-20709, 2024.
275
+ Ethan Weber, Riley Peterlinz, Rohan Mathur, Frederik Warburg, Alexei A. Efros, and Angjoo Kanazawa. Toon3d: Seeing cartoons from a new perspective. In arXiv, 2024.
276
+ Cheng Yang, Jia Zheng, Xili Dai, Rui Tang, Yi Ma, and Xiaojun Yuan. Learning to reconstruct 3d non-cuboid room layout from a single rgb image. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2534-2543, 2022.
277
+
278
+ Shang-Ta Yang, Fu-En Wang, Chi-Han Peng, Peter Wonka, Min Sun, and Hung-Kuo Chu. Dula-net: A dual-projection network for estimating room layouts from a single rgb panorama. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3363-3372, 2019.
279
+ Zehao Yu, Jia Zheng, Dongze Lian, Zihan Zhou, and Shenghua Gao. Single-image piece-wise planar 3d reconstruction via associative embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1029-1037, 2019.
280
+ Yuanwen Yue, Theodora Kontogianni, Konrad Schindler, and Francis Engelmann. Connecting the dots: Floorplan reconstruction using two-level queries. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 845-854, 2023.
281
+ Yinda Zhang, Fisher Yu, Shuran Song, Pingmei Xu, Ari Seff, and Jianxiong Xiao. Large-scale scene understanding challenge: Room layout estimation. In CVPR Workshop, volume 3, 2015.
282
+ Jia Zheng, Junfei Zhang, Jing Li, Rui Tang, Shenghua Gao, and Zihan Zhou. Structured3d: A large photo-realistic dataset for structured 3d modeling. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IX 16, pp. 519-535. Springer, 2020.
283
+ Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. arXiv preprint arXiv:1805.09817, 2018.
284
+ Yichao Zhou, Haozhi Qi, and Yi Ma. End-to-end wireframe parsing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 962-971, 2019.
285
+ Chuhang Zou, Alex Colburn, Qi Shan, and Derek Hoiem. Layoutnet: Reconstructing the 3d room layout from a single rgb image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2051-2059, 2018.
286
+
287
+ # A DUST3R DETAILS
288
+
289
+ Given a set of RGB images $\{I_1, I_2, \ldots, I_n\} \in \mathbb{R}^{H \times W \times 3}$ , we first pair them to create a set of image pairs $\mathbb{P} = \{(I_i, I_j) \mid i \neq j, 1 \leq i, j \leq n\}$ . For each image pair $(I_i, I_j) \in \mathbb{P}$ , the model estimates two point maps $X_{i,i}, X_{j,i}$ , along with their corresponding confidence maps $C_{i,i}, C_{j,i}$ . Specifically, both pointmaps are expressed in the camera coordinate system of $I_i$ , which implicitly accomplishes dense 3D reconstruction.
290
+
291
+ The model consists of two parallel branches, as shown in Figure 3, with each branch responsible for processing one image. The two images are first encoded in a Siamese manner with a weight-sharing ViT encoder (Dosovitskiy et al., 2020) to produce two latent features $F_{1}, F_{2}$: $F_{i} = \operatorname{Encoder}(I_{i})$. Next, $F_{1}, F_{2}$ are fed into two identical decoders that continuously share information through cross-attention, which allows the model to learn the relative geometric relationship between the two images. Specifically, for each decoder block:
292
+
293
+ $$
294
+ \begin{array}{l} \boldsymbol {G} _ {1, i} = \operatorname {D e c o d e r B l o c k} _ {1, i} \left(\boldsymbol {G} _ {1, i - 1}, \boldsymbol {G} _ {2, i - 1}\right), \\ \boldsymbol {G} _ {2, i} = \operatorname {D e c o d e r B l o c k} _ {2, i} \left(\boldsymbol {G} _ {1, i - 1}, \boldsymbol {G} _ {2, i - 1}\right) \\ \end{array}
295
+ $$
296
+
297
+ where $G_{1,0} \coloneqq F_1, G_{2,0} \coloneqq F_2$. Finally, the DPT (Ranftl et al., 2021) head regresses the pointmap and confidence map from the concatenated features of the different decoder layers:
298
+
299
+ $$
300
+ \begin{array}{l} \boldsymbol {X} _ {1, 1}, \boldsymbol {C} _ {1, 1} = \operatorname {H e a d} _ {1} \left(\boldsymbol {G} _ {1, 0}, \boldsymbol {G} _ {1, 1}, \dots , \boldsymbol {G} _ {1, B}\right) \\ \boldsymbol {X} _ {2, 1}, \boldsymbol {C} _ {2, 1} = \operatorname {H e a d} _ {2} \left(\boldsymbol {G} _ {2, 0}, \boldsymbol {G} _ {2, 1}, \dots , \boldsymbol {G} _ {2, B}\right) \\ \end{array}
301
+ $$
302
+
303
+ where $B$ is the number of decoder blocks. The regression loss function is defined as the scale-invariant Euclidean distance between the normalized predicted and ground-truth pointmaps:
304
+
305
+ $$
306
+ l _ {r e g r} (v, i) = \left\| \frac {1}{z} \boldsymbol {X} _ {i} ^ {v, 1} - \frac {1}{\bar {z}} \bar {\boldsymbol {X}} _ {i} ^ {v, 1} \right\| _ {2} ^ {2} \tag {6}
307
+ $$
308
+
309
+ where $v \in \{1, 2\}$ and $i$ is the pixel index. The scaling factors $z$ and $\bar{z}$ represent the average distance of all corresponding valid points to the origin. The original DUSt3R does not guarantee output at a metric scale, so we also train a modified version of Plane-DUSt3R that produces metric-scale results; the key change is setting $z := \bar{z}$. By weighting the regression loss with a per-pixel confidence, the model implicitly learns to identify regions that are more challenging to predict. As in DUSt3R (Wang et al., 2024):
310
+
311
+ $$
312
+ \mathcal {L} _ {\text {c o n f}} = \sum_ {v \in \{1, 2 \}} \sum_ {i \in \mathcal {D} ^ {v}} C _ {i} ^ {v, 1} \ell_ {\text {r e g r}} (v, i) - \alpha \log C _ {i} ^ {v, 1} \tag {7}
313
+ $$
314
+
315
+ To obtain the ground-truth pointmaps $\pmb{X}^{v,1}$, we first transform the ground truth depth map $\pmb{D} \in \mathbb{R}^{H \times W}$ into a pointmap $\pmb{X}^v$ expressed in the camera coordinate frame of $v$ by $\pmb{X}_{i,j}^v = \pmb{K}^{-1}[i\pmb{D}_{i,j}, j\pmb{D}_{i,j}, \pmb{D}_{i,j}]^\top$ with camera intrinsic matrix $K \in \mathbb{R}^{3 \times 3}$. Then we obtain $\pmb{X}^{v,1}$ via $\pmb{X}^{v,1} = \pmb{T}_1^{-1}\pmb{T}_v h(\pmb{X}^v)$, with $T_1, T_v \in \mathbb{R}^{3 \times 4}$ the camera-to-world poses and $h$ the homogeneous mapping.
316
+
317
+ Global Alignment. For global alignment, we aim to assign a global pointmap and camera pose to each image. First, the average confidence scores of each pair of images are used as similarity scores; a higher confidence implies stronger visual similarity between the two images. These scores are employed to construct a Minimum Spanning Tree, denoted as $\mathcal{G}(\mathcal{V},\mathcal{E})$, where each vertex in $\mathcal{V}$ corresponds to an image in the input set and each edge $e = (n,m)\in \mathcal{E}$ indicates that images $I_{n}$ and $I_{m}$ share significant visual content. We aim to find globally aligned pointmaps $\{\chi^n\in \mathbb{R}^{H\times W\times 3}\}$ and transformations $T_{i}\in \mathbb{R}^{3\times 4}$ that transform the predictions into the world coordinate frame. To do this, for each image pair $e = (n,m)\in \mathcal{E}$ we have two pointmaps $X^{n,n},X^{m,n}$ and their confidence maps $C^{n,n},C^{m,n}$. For simplicity, we use the notation $X^{n,e}\coloneqq X^{n,n}, X^{m,e}\coloneqq X^{m,n}$. Since $X^{n,e}$ and $X^{m,e}$ are expressed in the same coordinate frame, $T_{e}\coloneqq T_{n}$ should align both pointmaps with the world coordinate frame. We then solve the following optimization problem:
318
+
319
+ $$
320
+ \chi^ {*} = \arg \min _ {\chi , T, \sigma} \sum_ {e \in \mathcal {E}} \sum_ {v \in e} \sum_ {i = 1} ^ {H W} C _ {i} ^ {v, e} \| \chi_ {i} ^ {v} - \sigma_ {e} T _ {e} X _ {i} ^ {v, e} \| _ {2} ^ {2}. \tag {8}
321
+ $$
322
+
323
+ where $v \in e$ means $v$ can be either $n$ or $m$ for the pair $e$ and $\sigma_e$ is a positive scaling factor. To avoid the trivial solution where $\sigma_e = 0$, we ensure that $\prod_{e} \sigma_{e} = 1$.
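+
+ For illustration, the objective in equation (8) can be minimized by plain gradient descent as sketched below. This is a deliberately simplified sketch: $T_e$ is optimized as an unconstrained $3\times 4$ matrix instead of an SE(3) element, and the scale constraint is enforced by normalizing the log-scales; the actual implementation follows DUSt3R.
+
+ ```python
+ import torch
+
+ def global_align(pointmaps, confs, edges, n_imgs, n_pix, iters=300, lr=0.01):
+     """pointmaps[e][v], confs[e][v]: (n_pix, 3) and (n_pix,) tensors for edge e = (n, m), v in {0, 1}."""
+     chi = torch.zeros(n_imgs, n_pix, 3, requires_grad=True)                   # global pointmaps
+     T = torch.eye(4)[:3].repeat(n_imgs, 1, 1).clone().requires_grad_(True)    # per-image [R|t]
+     log_sigma = torch.zeros(len(edges), requires_grad=True)                   # per-edge log scales
+     opt = torch.optim.Adam([chi, T, log_sigma], lr=lr)
+     for _ in range(iters):
+         opt.zero_grad()
+         sigma = torch.exp(log_sigma - log_sigma.mean())                       # enforces prod_e sigma_e = 1
+         loss = 0.0
+         for e, (n, m) in enumerate(edges):
+             T_e = T[n]                                   # pair predictions live in view n's frame
+             for v, img in enumerate((n, m)):
+                 X_w = pointmaps[e][v] @ T_e[:, :3].T + T_e[:, 3]   # world-frame prediction T_e X
+                 res = ((chi[img] - sigma[e] * X_w) ** 2).sum(-1)
+                 loss = loss + (confs[e][v] * res).sum()
+         loss.backward()
+         opt.step()
+     return chi.detach(), T.detach()
+ ```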
324
+
325
+ # B $f_{3}$ ALGORITHM
326
+
327
+ The goal of multi-view layout estimation is similar to that of single-view: we need to estimate 3D parameters for each plane and determine the relationships between adjacent planes. However, in a multi-view setting, we must ensure that each plane represents a unique physical plane in 3D space. The main challenge in multi-view reconstruction is that the same physical plane may appear in multiple images, causing duplication. Our task is to identify which planes correspond to the same physical plane across different images and merge them, keeping only one representation for each unique plane.
328
+
329
+ Since we allow at most one floor and one ceiling detection per image, we simply average the parameters from all images to obtain the final floor and ceiling parameters. As for walls, we assume all walls are perpendicular to both the floor and ceiling. To simplify the merging process, we project all walls onto the x-z plane defined by the floor and ceiling. This projection reduces the problem to a 2D space, making it easier to identify and merge corresponding walls. Figure 5 illustrates the entire process of merging walls. Each wall in an image is denoted as one line segment, as shown in Figure 5a. We then rotate the scene so that all line segments are approximately horizontal or vertical, as depicted in 5b. In Figure 5c, each line segment is classified and further rotated to be either horizontal or vertical, based on the assumption that all adjacent walls are perpendicular to each other.
330
+
331
+ Algorithm 1 Merge Plane
332
+ Require: verticalLines, horizontalLines
333
+ 1: Sort verticalLines by x-axis value
334
+ 2: Initialize clusters with the first line of verticalLines
335
+ 3: for each line in the remaining verticalLines do
336
+ 4: found ← False
337
+ 5: for each cluster in clusters do
338
+ 6: if line.image_id in cluster.image_ids then
339
+ 7: continue
340
+ 8: end if
341
+ 9: if distance(line, cluster.centroid) < proximity_threshold then
342
+ 10: if overlap(line, cluster.centroid) > overlap_threshold then
343
+ 11: Insert line into cluster
344
+ 12: found ← True
345
+ 13: break
346
+ 14: end if
347
+ 15: if not intersect(line, cluster, horizontalLines, margin) then
348
+ 16: Insert line into cluster
349
+ 17: found ← True; break
350
+ 18: end if
351
+ 19: end if
352
+ 20: end for
353
+ 21: if found == False then
354
+ 22: Create a new cluster with line
355
+ 23: Append the new cluster to clusters
356
+ 24: end if
357
+ 25: end for
358
+ Ensure: Clusters
359
+
360
+ # C ADDITIONAL QUANTITATIVE RESULTS
361
+
362
+ # C.1 PERFORMANCE UNDER VARYING THRESHOLDS
363
+
364
+ To give a more complete view of the performance of the Plane-DUSt3R method, we present results under various threshold settings (thresholds on both translation and rotation) in Table 4.
365
+
366
+ Table 4: Quantitative results with different thresholds on Structure3D dataset.
367
+
368
+ <table><tr><td>Threshold(Translation &amp; Rotation)</td><td>3D-precision(%)↑</td><td>3D-recall(%)↑</td></tr><tr><td>0.1m, 5°</td><td>34.11</td><td>31.66</td></tr><tr><td>0.15m, 10°</td><td>52.63</td><td>48.37</td></tr><tr><td>0.2m, 15°</td><td>64.64</td><td>59.53</td></tr><tr><td>0.4m, 30°</td><td>82.75</td><td>76.13</td></tr></table>
369
+
370
+ Table 5: Quantitative results with different input views on Structure3D dataset.
371
+
372
+ <table><tr><td>Input views</td><td>re-IoU(%)↑</td><td>re-PE(%) ↓</td><td>re-EE↓</td><td>re-RMSE↓</td><td>3D-precision(%)↑</td><td>3D-recall(%)↑</td></tr><tr><td>2</td><td>75.02</td><td>8.72</td><td>8.70</td><td>0.4148</td><td>53.19</td><td>42.60</td></tr><tr><td>3</td><td>75.29</td><td>8.53</td><td>8.56</td><td>0.3596</td><td>54.43</td><td>47.97</td></tr><tr><td>4</td><td>75.55</td><td>8.39</td><td>8.55</td><td>0.3463</td><td>54.91</td><td>49.44</td></tr><tr><td>5</td><td>75.57</td><td>8.35</td><td>8.59</td><td>0.3422</td><td>55.02</td><td>49.59</td></tr></table>
373
+
374
+ # C.2 PERFORMANCE UNDER INPUT VIEWS
375
+
376
+ The impact of varying the number of input views on performance is presented in Table 5. For each setting, we select that number of views out of the 5 available views as input, and we only test on rooms that have all 5 views to eliminate potential bias from variations in room complexity. The results show a general trend of improvement as the number of views increases.
377
+
378
+ # D ADDITIONAL QUALITATIVE RESULTS
379
+
380
+ Figure 9 showcases more visualizations of our method on the Structured3D dataset, while Figure 10 presents failure cases. To demonstrate real-world applicability, we present results on in-the-wild images in Figure 11 and Figure 12.
381
+
382
+ # E EVALUATION RESULT ON CAD-ESTATE DATASET
383
+
384
+ We conducted an additional evaluation on the CAD-Estate dataset (Rozumnyi et al., 2023). CAD-Estate is derived from the RealEstate10K dataset (Zhou et al., 2018) and provides generic 3D room layouts from 2D segmentation masks. Due to differences in annotation standards between CAD-Estate and Structured3D, we selected a subset of the original data that aligns with our experimental setup. Our method and Structured3D assume a configuration with a single floor, a single ceiling, and multiple walls. In contrast, CAD-Estate includes scenarios with multiple ceilings (particularly in attic rooms) and rooms interconnected through open doorways, whereas Structured3D treats doorways as complete walls. To ensure a fair comparison, we sampled 100 scenes containing 469 images that closely match Structured3D's annotation style. Each scene contains 2 to 10 images.
385
+
386
+ Since CAD-Estate only provides 2D segmentation annotations without 3D information, we report performance using 2D metrics: IoU and pixel error. While CAD-Estate's label classes include ["ignore", "wall", "floor", "ceiling", "slanted"], we focus only on the wall, floor, and ceiling classes. We utilize the dataset's provided intrinsic parameters for reprojection during evaluation. Results are reported for both "Noncuboid + GT pose" and "Plane-DUSt3R (metric)" in Table 6. We visualize our results in Figures 13 and 14.
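+
+ For reference, the 2D metrics used here can be computed per class from the reprojected and ground-truth segmentation maps, as in the small sketch below (the integer label IDs are illustrative).
+
+ ```python
+ import numpy as np
+
+ def seg_iou_and_pixel_error(pred, gt, classes=(1, 2, 3)):
+     """pred, gt: (H, W) integer label maps; classes: wall/floor/ceiling IDs (illustrative)."""
+     ious = []
+     for c in classes:
+         inter = np.logical_and(pred == c, gt == c).sum()
+         union = np.logical_or(pred == c, gt == c).sum()
+         if union > 0:
+             ious.append(inter / union)
+     pixel_error = float((pred != gt).mean())      # fraction of mislabeled pixels
+     return float(np.mean(ious)), pixel_error
+ ```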
387
+
388
+ Table 6: Quantitative results on the CAD-Estate dataset.
389
+
390
+ <table><tr><td>Method</td><td>re-IoU(%)↑</td><td>re-PE(%) ↓</td></tr><tr><td>Noncuboid + GT pose</td><td>55.99</td><td>20.33</td></tr><tr><td>Ours (metric)</td><td>63.14</td><td>15.15</td></tr></table>
391
+
392
+ ![](images/e8bf44c554e10a4d7bca472883c671d85917db7703199421a4e9c4de43e9c2f5.jpg)
393
+ Figure 9: Qualitative results on the Structure3D test set. The fifth column is our result visualized with point clouds; the last column shows the result in pure wireframe.
394
+
395
+ ![](images/2a64aaa54d69fa5757bd8304e861531c0facc0901ee5f3cdf174f733f57315b3.jpg)
396
+ Figure 10: Failure case on the Structure3D test set. The first 4 columns are input views, the fifth column is our result visualized with point clouds, and the last column shows the result in pure wireframe.
397
+
398
+ ![](images/06564e1ec2676b04c1fc0b7eb1edeac8e6077cb2f5f6cfc2241db61c29d716ba.jpg)
399
+ Figure 11: We provide qualitative results on in-the-wild data.
400
+
401
+ ![](images/539cee9f2272f4dc586b0df1e48b8eafade49c89acb5aed453befab3b91341b9.jpg)
402
+
403
+ ![](images/423b74133c3735afee60f3ae19a9e7811311e25283897cb9c92bebf12b5e877f.jpg)
404
+
405
+ ![](images/7847c370d42e79f6afaa20560136386a9da63d1333a64e1dab5522d1f25aab64.jpg)
406
+ Figure 12: We provide qualitative results on out-of-domain cartoon data (Weber et al., 2024).
407
+
408
+ ![](images/be6e5b0482a1d4c2ebe78eb5d66233d3f386a7c3e9f2f3ed3d02a7746a5f6932.jpg)
409
+
410
+ ![](images/8d0bce755d65c0278911b3971783f607b911b2862536274053e012b87a01d8f5.jpg)
411
+
412
+ ![](images/d2bb9536cf59617af6513f9edd08309eb9fb46142988c8a4d85613b3589ca735.jpg)
413
+ Figure 13: Visualization of results from the CAD-Estate dataset. (a) Input views are shown in the top row, followed by CAD-Estate's ground-truth segmentation in the middle row, and our predicted segmentation in the bottom row. (b) Our 3D reconstruction results displayed with point clouds (top row) and wireframe renderings (bottom row).
414
+
415
+ ![](images/c961511e691cb6db084c1ce8b881bc229eaff1b3bd34bea277f15320afa6ef60.jpg)
416
+ (a)
417
+
418
+ ![](images/7fb42b98b5a2559e0840e6fc48f4a2b8362ac5f3c003edab2d5f17b82ba13582.jpg)
419
+
420
+ ![](images/25b5a141de38b8649d73077b82947108ef82887c007edb8eee9e49406dad7d29.jpg)
421
+
422
+ ![](images/06d14a73e29efd7249d0fc88b526ff6be4449a0888fcebe2896297ad344c6262.jpg)
423
+ (b)
424
+
425
+ ![](images/b468cf0365a79066affb5f5d494611766a0a1c4c2ad954e8a8abaae0431fec3a.jpg)
426
+ Figure 14: Visualization of results from the CAD-Estate dataset. (a) Input views are shown in the top row, followed by CAD-Estate's ground-truth segmentation in the middle row, and our predicted segmentation in the bottom row. (b) Our 3D reconstruction results displayed with point clouds (top row) and wireframe renderings (bottom row).
427
+
428
+ ![](images/f1470262691e99d63ba582e926ac930e88916cdd78fdd84fbd6035f699063f84.jpg)
429
+ (a)
430
+
431
+ ![](images/81444cb02bf717d5a55ff265baad6d31d477f3f3adbc3b9e6ac9deef3428a704.jpg)
432
+
433
+ ![](images/23241f85b71e14e2478df0a335484fed419d069aed63aecaacea2aebdb605945.jpg)
434
+
435
+ ![](images/e324d729cebe7e037dede91950264d191db7b39f16b4455d92b443f9c3ad6d61.jpg)
436
+
437
+ ![](images/fed1f484d6173f1a61ad73a32f0cfadb0244da9b8a52a39165bf80c0c4cf3dc2.jpg)
438
+ (b)
2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d9db695c2d84c3c44bdc056fedeef1df5cf0bfe953bc7632d1a8511a5b451db
3
+ size 904778
2025/Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/c2d25ae0-a049-4a50-b7c8-27c324a76f6b_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/c2d25ae0-a049-4a50-b7c8-27c324a76f6b_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/c2d25ae0-a049-4a50-b7c8-27c324a76f6b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c3441dd9967d67573aa9b9b395161e89c73cf8371a5704812115099721a2d8f
3
+ size 34147419
2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/full.md ADDED
@@ -0,0 +1,477 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # UNSUPERVISED DISENTANGLEMENT OF CONTENT AND STYLE VIA VARIANCE-INVARIANCE CONSTRAINTS
2
+
3
+ Yuxuan Wu<sup>b</sup> Ziyu Wang<sup>ab</sup> Bhiksha Raj<sup>ab</sup> Gus Xia<sup>b</sup>
4
+
5
+ b Mohamed bin Zayed University of Artificial Intelligence
6
+ New York University Shanghai
7
+ Carnegie Mellon University
8
+
9
+ # ABSTRACT
10
+
11
+ We contribute an unsupervised method that effectively learns disentangled content and style representations from sequences of observations. Unlike most disentanglement algorithms that rely on domain-specific labels or knowledge, our method is based on the insight of domain-general statistical differences between content and style — content varies more among different fragments within a sample but maintains an invariant vocabulary across data samples, whereas style remains relatively invariant within a sample but exhibits more significant variation across different samples. We integrate such inductive bias into an encoder-decoder architecture and name our method after V3 (variance-versus-invariance). Experimental results show that V3 generalizes across multiple domains and modalities, successfully learning disentangled content and style representations, such as pitch and timbre from music audio, digit and color from images of hand-written digits, and action and character appearance from simple animations. V3 demonstrates strong disentanglement performance compared to existing unsupervised methods, along with superior out-of-distribution generalization under few-shot adaptation compared to supervised counterparts. Lastly, symbolic-level interpretability emerges in the learned content codebook, forging a near one-to-one alignment between machine representation and human knowledge.<sup>1</sup>
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ Learning abstract concepts is an essential part of human intelligence. Even without any label supervision, we humans can abstract rich observations with great variety into a category, and such capability generalizes across different domains and modalities. For example, we can effortlessly perceive a picture of a "cat" captured at any angle or set against any background, we can perceive the symbolic number "8" from an image irrespective of its color or writing style variations, and we can perceive an abstract pitch class "A" from an acoustic signal regardless of its timbre. These concepts form the fundamental vocabulary of our languages—be they natural, mathematical, or musical—and underpin effective and interpretable communication in everyday life.
16
+
17
+ Our goal is to emulate such abstraction capability using machine learning. We choose a content-style representation disentanglement approach as we believe that representation disentanglement offers a more complete picture of abstraction—concepts that matter more in communication, such as an “8” in a written phone number or a note pitch “A” in a folk song, are usually perceived as content, while the associated variations that often matter less in context, such as the written style of a digit or the singing style of a song, are perceived as style. In addition, content is usually symbolized and associated with rigid labels, as we need precise control over it during communication. E.g., to write “8” as “9” in a phone number or to sing an “A” as “B” in a performance can be a fatal error. In comparison, though style can also be described discretely, such as an “italic” writing or a “tenor” voice, a variation over it is usually much more tolerable.
18
+
19
+ ![](images/85f6fb8566f8796cc983f9dfa64ce401d9b65340f58c9ef2a80181d3396e3612.jpg)
20
+ Figure 1: An illustration of the variance-versus-invariance constraints in content and style. Here, content refers to the symbols. Each row represents a data sample, which is divided into multiple fragments along columns. Each fragment contains one content-style pair. For example, digits (e.g., 9, 8, 6) can represent content, while colors (e.g., orange, brown, teal) represent style.
21
+
22
+ In the machine learning literature, significant progress has recently been made in content-style disentanglement for various tasks, including disentangling objects from backgrounds (Hong et al., 2023), characters from fonts (Liu et al., 2018; Xie et al., 2021), pitch from timbre (Lu et al., 2019; Bonnici et al., 2022), and phonemes from speaker identity (Qian et al., 2019; Li et al., 2021). However, most existing models are either limited to specific domains (Yingzhen and Mandt, 2018; Bai et al., 2021; Luo et al., 2022) or rely heavily on domain-specific knowledge as implicit supervision. The supervision forms can be explicit content or style labels (Liu et al., 2017; Zhu et al., 2017; Park et al., 2020; Karras et al., 2019; Choi et al., 2020; Bonnici et al., 2022; Patashnik et al., 2021; Kwon and Ye, 2022), pre-trained content or style representations (Qian et al., 2020b; 2019), or paired data showcasing the same content rendered in different styles or vice versa (Isola et al., 2017; Sangkloy et al., 2017). In addition, the disentangled representations often fell short in generalizing to new contents or styles, and they lack interpretability at a symbolic level and do not align well with human perceptions (Zhang et al., 2021; Nauta et al., 2023).
23
+
24
+ To address the aforementioned challenges, achieving more generalizable and interpretable disentanglement in an unsupervised manner, we introduce V3 (variance-versus-invariance). V3 disentangles content and style by leveraging meta-level prior knowledge about their inherent statistical differences. As shown in Figure 1, our design principle is based on the observation that content and style display distinct patterns of variation—content undergoes frequent changes within different fragments of a sample yet maintains a consistent vocabulary across data samples, whereas style remains relatively stable within a sample but exhibits more significant variation across different samples.
25
+
26
+ In this paper, we adopt the vector-quantized autoencoder architecture and incorporate variance-versus-invariance constraints to guide the learning of latent representations that capture content-style distinctions. We demonstrate that V3 effectively generalizes across distinct areas: disentangling pitches and timbres from musical data, disentangling numbers and ink colors from images of digits, and disentangling character actions and appearances from game video clips. Experimental results show that our approach achieves more robust content-style disentanglement than unsupervised baselines, and outperforms even supervised methods in out-of-distribution (OOD) generalization and few-shot learning for discriminative tasks. Lastly, symbolic-level interpretability emerges with a near one-to-one alignment between the vector-quantized codebook and human knowledge, an outcome not yet seen in previous studies. In summary, our contributions are as follows:
27
+
28
+ - Unsupervised content-style disentanglement: We introduce V3, an unsupervised method leveraging meta-level inductive bias to disentangle content and style representations, without requiring paired data, content or style labels, or domain-specific assumptions.
29
+ - Out-of-distribution generalization: As a result of successful content-style disentanglement, V3 shows better out-of-distribution generalization capabilities compared to supervised methods in few-shot settings, that is, recognizing content when presented with only a few examples of unseen styles.
30
+
31
+ - Emergence of interpretable symbols: Given data that is semantically segmented into fragments, V3 can foster the development of interpretable content symbols that closely align with human knowledge.
32
+
33
+ # 2 RELATED WORK
34
+
35
+ Content-style disentanglement, as well as the related style transfer problem, has been well explored in computer vision, especially in the context of image-to-image translation. Early works mostly require paired data of the same content with different styles (Isola et al., 2017; Sangkloy et al., 2017), until the introduction of domain transfer networks that can learn style transfer functions without paired data (Zhu et al., 2017; Liu et al., 2017; Taigman et al., 2016; Bousmalis et al., 2017; Park et al., 2020; Choi et al., 2018; 2020; Karras et al., 2019; Xie et al., 2022a;b). Although these methods are unsupervised in the sense that they do not require paired data, they still require concrete labels of styles to identify source and target domains, and they provide no fully interpretable representations of either content or style.
36
+
37
+ A similar trajectory of research has also been followed in other domains, including speech (Qian et al., 2019; Kameoka et al., 2018; Kaneko et al., 2019; Wu et al., 2023) and music (Lu et al., 2019; Bonnici et al., 2022; Luo et al., 2022; Lin et al., 2023; Zhang et al., 2024; Lin et al., 2021). To mitigate the requirement for supervision, some methods utilize domain-specific knowledge and have achieved better disentanglement results, including X-vectors of speakers (Qian et al., 2019; 2020a), the close relation between fundamental frequency and content in audio (Qian et al., 2020a;b), or pre-defined style or content representations (Yang et al., 2019; Wang et al., 2020; 2022).
38
+
39
+ Pure unsupervised learning for content and style disentanglement has not been well explored. Notable attempts include adversarial training-based methods (Chen et al., 2016; Ren et al., 2021), mutual information neural estimation (MINE) (Belghazi et al., 2018; Poole et al., 2019; Tjandra et al., 2020a; Zhang and Dixon, 2023), and low-dimensional representation learning with physical symmetry (Liu et al., 2023). However, these methods often suffer from training instability or have not been shown to be domain-general. Other unsupervised methods on sequential data, such as Disentangled Sequential Autoencoder (DSAE) and its variants, leverage the nature of content and style to learn their representations at different scales, but their applications are limited to purely sequential data with a static style (Hsu et al., 2017; Yingzhen and Mandt, 2018; Bai et al., 2021; Luo et al., 2022; 2024; Yin et al., 2022).
40
+
41
+ A technique often associated with learned content is vector quantization (VQ) (Van Den Oord et al., 2017). Recent efforts have built language models on top of VQ codes for long-term generation, indicating the association between VQ codebook and the underlying information content (Yan et al., 2021; Tan et al., 2021; Copet et al., 2024; Garcia et al., 2023; Tjandra et al., 2020a,b; Vali and Backström, 2023). A noticeable characteristic of these studies is the use of large codebooks, which limits the interpretability of representations. We borrow the idea of a small codebook size from categorical representations (Chen et al., 2016; Ji et al., 2019), targeting a more concise and unified content code across different styles, while keeping the high-dimensional nature of VQ representations.
42
+
43
+ # 3 METHODOLOGY
44
+
45
+ Consider a dataset consisting of $N$ data samples, where each sample contains $L$ fragments. We aim to learn each fragment's content and style representation with the inductive bias illustrated in Figure 1. Intuitively, the fragments within each data sample have a relatively frequently-changing content and a relatively stable style. Across different data samples, the style exhibits significant variation while the content largely maintains a consistent vocabulary. In the following, we first introduce the autoencoder architecture V3 is built upon, then the variability statistics that quantify the changing patterns of content and style, and finally the proposed variance-versus-invariance constraints.
46
+
47
+ # 3.1 MODEL ARCHITECTURE
48
+
49
+ The model architecture of V3 is illustrated in Figure 2. Let $X = \{x_{ij}\}_{N\times L}$ be the dataset, where $x_{ij}$ corresponds to the $j$ -th fragment of the $i$ -th sample. We use an autoencoder architecture to learn the representations of $x_{ij}$ . The encoder encodes the input data $x_{ij}$ to the latent space, which is split
50
+
51
+ ![](images/8992734c90a9f1dceb10f9228b71d42146e91ebb9f30ad5e907cb45b8fa8b800.jpg)
52
+ Figure 2: The model architecture of V3. Left: The autoencoder has two branches for content and style respectively, where the content branch has a VQ layer at the encoder output. Right: the V3 constraints, where double-dashed arrows represent measuring the variability by $\nu_{k}(\cdot)$ , and solid arrows represent taking the average.
53
+
54
+ into $z_{ij}^{\mathrm{c}}$ and $z_{ij}^{\mathrm{s}}$. We use vector quantization as the dictionary learning method for content. Every content representation $z_{ij}^{\mathrm{c}}$ is quantized to the nearest atom in a codebook of size $K$ as $\tilde{z}_{ij}^{\mathrm{c}}$. The decoder integrates $\tilde{z}_{ij}^{\mathrm{c}}$ and $z_{ij}^{\mathrm{s}}$ and reconstructs the fragment $\hat{x}_{ij}$. The overall loss function is the weighted sum of three terms:
55
+
56
+ $$
57
+ \mathcal {L} = \mathcal {L} _ {\mathrm {r e c}} + \alpha \mathcal {L} _ {\mathrm {v q}} + \beta \mathcal {L} _ {\mathrm {V 3}}. \tag {1}
58
+ $$
59
+
60
+ Here, $\mathcal{L}_{\mathrm{rec}}$ is the reconstruction loss of $X$ and $\mathcal{L}_{\mathrm{vq}}$ is the VQ commit loss (Van Den Oord et al., 2017):
61
+
62
+ $$
63
+ \mathcal {L} _ {\mathrm {r e c}} = \frac {1}{N \times L} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {L} \| \boldsymbol {x} _ {i j} - \hat {\boldsymbol {x}} _ {i j} \| _ {2}, \tag {2}
64
+ $$
65
+
66
+ $$
67
+ \mathcal {L} _ {\mathrm {v q}} = \frac {1}{N \times L} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {L} \| \boldsymbol {z} _ {i j} ^ {\mathrm {c}} - \operatorname {s g} \left(\tilde {\boldsymbol {z}} _ {i j} ^ {\mathrm {c}}\right) \| _ {2}, \tag {3}
68
+ $$
69
+
70
+ where $\mathrm{sg}(\cdot)$ is the stop-gradient operation used in straight-through optimization. The final term $\mathcal{L}_{\mathrm{V3}}$ is the proposed regularization method to ensure unsupervised content-style disentanglement, which we introduce in the rest of this section. (For more details of the model architecture and data representations, we refer the readers to Appendix B.)
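+
+ To make the quantization step concrete, the following is a minimal PyTorch sketch of the content-branch vector quantization with a straight-through estimator and the loss assembly of Equation 1. It is our own illustration rather than the authors' released code; the function names, the squared-error reconstruction term, and the default weights are assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def quantize(z_c, codebook):
+     """Snap each content code to its nearest codebook atom (straight-through)."""
+     # z_c: (B, D) content codes; codebook: (K, D) learnable atoms
+     idx = torch.cdist(z_c, codebook).argmin(dim=1)   # nearest atom per fragment
+     z_q = codebook[idx]                              # quantized codes, shape (B, D)
+     z_q_st = z_c + (z_q - z_c).detach()              # straight-through: gradients flow to z_c
+     commit = F.mse_loss(z_c, z_q.detach())           # commitment term (squared-error variant of Eq. 3)
+     return z_q_st, idx, commit
+
+ def total_loss(x, x_hat, commit, l_v3, alpha=1.0, beta=1.0):
+     """Weighted sum of reconstruction, VQ commitment, and V3 terms (Eq. 1)."""
+     rec = F.mse_loss(x_hat, x)                       # squared-error variant of Eq. 2
+     return rec + alpha * commit + beta * l_v3
+ ```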
71
+
72
+ # 3.2 VARIABILITY STATISTICS
73
+
74
+ We define four statistics to measure the degree of variability in accordance with the four edges of Figure 1. These statistics are based on a backbone variability measurement $\nu_{k}(\cdot)$, where $k$ represents the dimension along which variability is computed. In this paper, we define $\nu_{k}(\cdot)$ as the mean pairwise distance (MPD). Formally, for a sequence of $D$ vectors $\boldsymbol{z}_{1}, \ldots, \boldsymbol{z}_{D}$,
75
+
76
+ $$
77
+ \nu_ {i = 1} ^ {D} \left(\boldsymbol {z} _ {i}\right) := \operatorname {M P D} _ {i = 1} ^ {D} \left(\boldsymbol {z} _ {i}\right) = \frac {1}{D (D - 1)} \sum_ {i = 1} ^ {D} \sum_ {j = 1, j \neq i} ^ {D} \| \boldsymbol {z} _ {i} - \boldsymbol {z} _ {j} \| _ {2}. \tag {4}
78
+ $$
79
+
80
+ The motivation for using MPD is that it is more sensitive to multi-peak distributions than the standard deviation, a property that is preferred when learning diverse content symbols within a sample. We compare different choices of $\nu_{k}(\cdot)$ in Appendix D.
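+
+ As a reference, a differentiable PyTorch sketch of the MPD in Equation 4 could look as follows (our own illustration; the helper name `mpd` is a placeholder):
+
+ ```python
+ import torch
+
+ def mpd(z):
+     """Mean pairwise L2 distance over the first axis of z, shape (D, feat); Eq. 4."""
+     d = torch.cdist(z, z)              # (D, D) pairwise distances, zero diagonal
+     n = z.shape[0]
+     return d.sum() / (n * (n - 1))     # average over the D(D-1) off-diagonal pairs
+ ```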
81
+
82
+ Content variability within a sample $(\mathcal{V}_{\mathrm{f}}^{\mathrm{c}})$. We first compute the variability of content along the fragment axis and take the average along the sample axis. The value averages the variability of content codes before and after vector quantization:
83
+
84
+ $$
85
+ \mathcal{V}_{\mathrm{f}}^{\mathrm{c}} = \frac{1}{2N} \sum_{i=1}^{N} \nu_{j=1}^{L} \left(z_{ij}^{\mathrm{c}}\right) + \frac{1}{2N} \sum_{i=1}^{N} \nu_{j=1}^{L} \left(\tilde{z}_{ij}^{\mathrm{c}}\right). \tag{5}
86
+ $$
87
+
88
+ Content variability across samples $(\mathcal{V}_{\mathrm{s}}^{\mathrm{c}})$ . Theoretically, we aim to measure the consistency of codebook usage distribution along the sample axis, which is not differentiable. In practice, we compute the center of the content code along the fragment axis and measure the variability of the centers along the sample axis. It serves as a proxy of codebook utilization. Also, we consider both content codes before and after vector quantization:
89
+
90
+ $$
91
+ \mathcal {V} _ {\mathrm {s}} ^ {\mathrm {c}} = \frac {1}{2} \nu_ {i = 1} ^ {N} \left(\frac {1}{L} \sum_ {j = 1} ^ {L} z _ {i j} ^ {\mathrm {c}}\right) + \frac {1}{2} \nu_ {i = 1} ^ {N} \left(\frac {1}{L} \sum_ {j = 1} ^ {L} \tilde {z} _ {i j} ^ {\mathrm {c}}\right). \tag {6}
92
+ $$
93
+
94
+ Style variability within a sample $(\mathcal{V}_{\mathrm{f}}^{\mathrm{s}})$ . We compute the variability of style representations among fragments and take its mean across all samples:
95
+
96
+ $$
97
+ \mathcal {V} _ {\mathrm {f}} ^ {\mathrm {s}} = \frac {1}{N} \sum_ {i = 1} ^ {N} \nu_ {j = 1} ^ {L} \left(\boldsymbol {z} _ {i j} ^ {\mathrm {s}}\right). \tag {7}
98
+ $$
99
+
100
+ Style variability across samples $(\mathcal{V}_{\mathrm{s}}^{\mathrm{s}})$ . We compute the average style representation along the fragment axis and measure its variability along the sample axis:
101
+
102
+ $$
103
+ \mathcal {V} _ {\mathrm {s}} ^ {\mathrm {s}} = \nu_ {i = 1} ^ {N} \left(\frac {1}{L} \sum_ {j = 1} ^ {L} z _ {i j} ^ {\mathrm {s}}\right). \tag {8}
104
+ $$
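+
+ The four statistics above can be computed in a few lines. The sketch below assumes content codes `zc`, quantized codes `zc_q`, and style codes `zs`, all shaped `(N, L, D)`, and reuses the hypothetical `mpd` helper from the previous sketch; it is an illustration of Equations 5-8 rather than the authors' implementation.
+
+ ```python
+ import torch
+
+ def variability_stats(zc, zc_q, zs):
+     """Return (V_f^c, V_s^c, V_f^s, V_s^s) for tensors shaped (N, L, D); Eqs. 5-8."""
+     N = zc.shape[0]
+     # Content variability within samples: MPD over fragments, averaged over samples,
+     # using codes both before and after vector quantization.
+     v_f_c = 0.5 * (torch.stack([mpd(zc[i]) for i in range(N)]).mean() +
+                    torch.stack([mpd(zc_q[i]) for i in range(N)]).mean())
+     # Content variability across samples: MPD of per-sample fragment means.
+     v_s_c = 0.5 * (mpd(zc.mean(dim=1)) + mpd(zc_q.mean(dim=1)))
+     # Style variability within samples.
+     v_f_s = torch.stack([mpd(zs[i]) for i in range(N)]).mean()
+     # Style variability across samples.
+     v_s_s = mpd(zs.mean(dim=1))
+     return v_f_c, v_s_c, v_f_s, v_s_s
+ ```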
105
+
106
+ # 3.3 VARIANCE-VERSUS-INVARIANCE (V3) CONSTRAINTS
107
+
108
+ With the variability statistics, we can formalize the general relationship between content and style along the sample or fragment axis:
109
+
110
+ - Content should be more variable within samples than the aggregated content across samples, i.e., $\mathcal{V}_{\mathrm{f}}^{\mathrm{c}} \gg \mathcal{V}_{\mathrm{s}}^{\mathrm{c}}$ .
111
+ - Style should be more variable across samples than within samples, i.e., $\mathcal{V}_{\mathrm{s}}^{\mathrm{s}} \gg \mathcal{V}_{\mathrm{f}}^{\mathrm{s}}$.
112
+ - Within a sample, content should be more variable than style, i.e., $\mathcal{V}_{\mathrm{f}}^{\mathrm{c}} \gg \mathcal{V}_{\mathrm{f}}^{\mathrm{s}}$.
113
+ - Across samples, style should be more variable than content, i.e., $\mathcal{V}_{\mathrm{s}}^{\mathrm{s}} \gg \mathcal{V}_{\mathrm{s}}^{\mathrm{c}}$.
114
+
115
+ We quantify the above contrasts as regularization terms, using a hinge function to cut off gradient back-propagation once the ratio between two variability statistics reaches a threshold $r > 1$, which controls the required relative gap (Bardes et al., 2022):
116
+
117
+ $$
118
+ \mathcal{L}_{\text{content}} = \max\left(0, 1 - \frac{\mathcal{V}_{\mathrm{f}}^{\mathrm{c}}}{r \cdot \mathcal{V}_{\mathrm{s}}^{\mathrm{c}}}\right), \quad \left(\mathcal{V}_{\mathrm{f}}^{\mathrm{c}} \gg \mathcal{V}_{\mathrm{s}}^{\mathrm{c}}\right) \tag{9}
119
+ $$
120
+
121
+ $$
122
+ \mathcal{L}_{\text{style}} = \max\left(0, 1 - \frac{\mathcal{V}_{\mathrm{s}}^{\mathrm{s}}}{r \cdot \mathcal{V}_{\mathrm{f}}^{\mathrm{s}}}\right), \quad \left(\mathcal{V}_{\mathrm{s}}^{\mathrm{s}} \gg \mathcal{V}_{\mathrm{f}}^{\mathrm{s}}\right) \tag{10}
123
+ $$
124
+
125
+ $$
126
+ \mathcal{L}_{\text{fragment}} = \max\left(0, 1 - \frac{\mathcal{V}_{\mathrm{f}}^{\mathrm{c}}}{r \cdot \mathcal{V}_{\mathrm{f}}^{\mathrm{s}}}\right), \quad \left(\mathcal{V}_{\mathrm{f}}^{\mathrm{c}} \gg \mathcal{V}_{\mathrm{f}}^{\mathrm{s}}\right) \tag{11}
127
+ $$
128
+
129
+ $$
130
+ \mathcal{L}_{\text{sample}} = \max\left(0, 1 - \frac{\mathcal{V}_{\mathrm{s}}^{\mathrm{s}}}{r \cdot \mathcal{V}_{\mathrm{s}}^{\mathrm{c}}}\right). \quad \left(\mathcal{V}_{\mathrm{s}}^{\mathrm{s}} \gg \mathcal{V}_{\mathrm{s}}^{\mathrm{c}}\right) \tag{12}
131
+ $$
132
+
133
+ We obtain the V3 regularization term (used in Equation 1) by summing up the four terms:
134
+
135
+ $$
136
+ \mathcal{L}_{\mathrm{V3}} = \mathcal{L}_{\text{content}} + \mathcal{L}_{\text{style}} + \mathcal{L}_{\text{fragment}} + \mathcal{L}_{\text{sample}}. \tag{13}
137
+ $$
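+
+ Putting the pieces together, the V3 regularizer can be sketched as the sum of four hinge terms; this is our own illustration, and the default threshold `r = 2.0` below is an assumption for demonstration, not the value used in the paper.
+
+ ```python
+ import torch
+
+ def v3_loss(v_f_c, v_s_c, v_f_s, v_s_s, r=2.0):
+     """Hinge-style V3 regularizer of Equations 9-13; r > 1 sets the required ratio."""
+     l_content  = torch.clamp(1.0 - v_f_c / (r * v_s_c), min=0.0)   # V_f^c >> V_s^c
+     l_style    = torch.clamp(1.0 - v_s_s / (r * v_f_s), min=0.0)   # V_s^s >> V_f^s
+     l_fragment = torch.clamp(1.0 - v_f_c / (r * v_f_s), min=0.0)   # V_f^c >> V_f^s
+     l_sample   = torch.clamp(1.0 - v_s_s / (r * v_s_c), min=0.0)   # V_s^s >> V_s^c
+     return l_content + l_style + l_fragment + l_sample
+ ```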
138
+
139
+ Table 1: Evaluation of digit and color disentanglement on PhoneNums using latent retrieval. Values are reported in percentage.
140
+
141
+ <table><tr><td rowspan="3">Method</td><td rowspan="3">K</td><td colspan="4">Content</td><td colspan="4">Style</td></tr><tr><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td></tr><tr><td>zc↑</td><td>zs↓</td><td>zc↑</td><td>zs↓</td><td>zc↓</td><td>zs↑</td><td>zc↓</td><td>zs↑</td></tr><tr><td rowspan="3">V3</td><td>10</td><td>83.2</td><td>12.8</td><td>84.1</td><td>18.5</td><td>14.9</td><td>95.4</td><td>22.6</td><td>91.0</td></tr><tr><td>20</td><td>93.0</td><td>11.6</td><td>92.9</td><td>18.2</td><td>11.9</td><td>93.9</td><td>22.7</td><td>89.9</td></tr><tr><td>40</td><td>86.3</td><td>10.9</td><td>83.4</td><td>18.0</td><td>15.5</td><td>95.3</td><td>22.9</td><td>93.0</td></tr><tr><td rowspan="3">MINE-based</td><td>10</td><td>33.8</td><td>21.6</td><td>36.0</td><td>30.8</td><td>14.0</td><td>35.5</td><td>22.7</td><td>38.6</td></tr><tr><td>20</td><td>41.9</td><td>25.0</td><td>49.5</td><td>25.4</td><td>22.2</td><td>37.5</td><td>33.2</td><td>39.0</td></tr><tr><td>40</td><td>46.8</td><td>23.8</td><td>49.8</td><td>28.0</td><td>26.6</td><td>37.7</td><td>27.6</td><td>48.8</td></tr><tr><td rowspan="3">Cycle loss</td><td>10</td><td>55.1</td><td>25.4</td><td>64.3</td><td>27.8</td><td>17.0</td><td>33.1</td><td>22.6</td><td>37.4</td></tr><tr><td>20</td><td>52.0</td><td>23.6</td><td>58.5</td><td>29.2</td><td>18.2</td><td>35.9</td><td>23.4</td><td>38.8</td></tr><tr><td>40</td><td>53.8</td><td>20.7</td><td>62.8</td><td>22.4</td><td>19.3</td><td>31.6</td><td>24.5</td><td>35.8</td></tr><tr><td>β-VAE</td><td>-</td><td colspan="2">24.9</td><td colspan="2">27.0</td><td colspan="2">25.6</td><td colspan="2">30.5</td></tr><tr><td>EC2-VAE (c)</td><td>-</td><td>95.2</td><td>11.5</td><td>95.1</td><td>18.0</td><td>16.8</td><td>57.7</td><td>22.6</td><td>57.2</td></tr><tr><td>EC2-VAE (c &amp; s)</td><td>-</td><td>95.2</td><td>13.6</td><td>95.1</td><td>19.6</td><td>25.2</td><td>96.2</td><td>31.0</td><td>91.0</td></tr></table>
142
+
143
+ # 4 EXPERIMENTS
144
+
145
+ We evaluate V3 on both synthetic and real data to assess its effectiveness and generalizability in different domains and scenarios, covering audio, image, and video data. The highlight of this section is that V3 effectively learns disentangled representations of content and style, generalizes well out of distribution, and its discrete content representations manifest symbolic-level interpretability that aligns well with human knowledge. We also provide additional results in Appendix C, and an ablation study in Appendix D.
146
+
147
+ We compare V3 with three unsupervised baselines: 1) an unsupervised content-style disentanglement method based on MINE (Tjandra et al., 2020a), 2) a 2-branch autoencoder similar to our architecture choice, but trained with a cycle consistency loss after decoding and re-encoding shuffled combinations of $\tilde{z}^{\mathrm{c}}$ and $z^{\mathrm{s}}$ (Zhu et al., 2017), and 3) a vanilla $\beta$-VAE (Higgins et al., 2017). Additionally, we compare with two methods with label supervision: 1) a weakly-supervised method for disentanglement named EC$^2$-VAE, in which the model is trained to predict the correct content labels from $z_{ij}^{\mathrm{c}}$ as a replacement for the VQ layer, and the decoder is trained to reconstruct inputs from $z_{ij}^{\mathrm{s}}$ and ground truth content labels (Yang et al., 2019; Wang et al., 2020), and 2) a fully supervised variant of EC$^2$-VAE provided with both content and style labels, in which the model learns to predict both content and style from their latent representations. We denote them as EC$^2$-VAE (c) and EC$^2$-VAE (c & s), respectively. All reported results are the average of three best-performing checkpoints on validation sets. We provide further details of model architectures in Appendix B.
148
+
149
+ # 4.1 DATASETS
150
+
151
+ Written Phone Numbers Dataset (PhoneNums): We synthesize an image dataset of written digit strings on light backgrounds using 8 different ink colors, mimicking a scenario of handwritten phone numbers. The order of digits is random. All images are diversified with noise, blur, and foreground and background color jitters. Models should learn digits and colors as content and style, respectively.
152
+
153
+ Monophonic Instrument Notes Dataset (InsNotes): We synthesize a dataset consisting of $16\mathrm{kHz}$ monophonic music audio of 12 different instruments playing 12 different pitches in an octave. Every pitch is played for one second with a random velocity and amplitude envelope. The audio files are then normalized and processed to magnitude spectrograms. Models should learn pitches and timbres as content and style, respectively.
154
+
155
+ Street View House Numbers (SVHN) (Netzer et al., 2011): We select all images with more than one digit from the SVHN dataset. We crop the images to the bounding boxes of the digits and resize them to $32 \times 48$. Models should learn digits as content, and their fonts, textures, and colors as style. Note that the styles can be seen as coming from a continuous space, and the fonts in SVHN are very diverse.
156
+
157
+ Table 2: Evaluation of pitch and timbre disentanglement on InsNotes using latent retrieval. Values are reported in percentage.
158
+
159
+ <table><tr><td rowspan="3">Method</td><td rowspan="3">K</td><td colspan="4">Content</td><td colspan="4">Style</td></tr><tr><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td></tr><tr><td>zc↑</td><td>zs↓</td><td>zc↑</td><td>zs↓</td><td>zc↓</td><td>zs↑</td><td>zc↓</td><td>zs↑</td></tr><tr><td rowspan="3">V3</td><td>12</td><td>89.9</td><td>8.9</td><td>90.1</td><td>15.1</td><td>9.3</td><td>87.5</td><td>15.0</td><td>88.0</td></tr><tr><td>24</td><td>76.2</td><td>8.7</td><td>80.0</td><td>14.2</td><td>12.8</td><td>68.9</td><td>20.3</td><td>70.0</td></tr><tr><td>48</td><td>72.2</td><td>8.4</td><td>74.4</td><td>14.2</td><td>12.3</td><td>72.2</td><td>22.0</td><td>71.5</td></tr><tr><td rowspan="3">MINE-based</td><td>12</td><td>56.4</td><td>7.61</td><td>62.0</td><td>14.2</td><td>10.3</td><td>61.4</td><td>16.9</td><td>63.7</td></tr><tr><td>24</td><td>50.5</td><td>8.5</td><td>59.1</td><td>14.9</td><td>14.7</td><td>53.4</td><td>19.5</td><td>51.4</td></tr><tr><td>48</td><td>44.6</td><td>10.2</td><td>54.0</td><td>16.5</td><td>13.8</td><td>52.1</td><td>18.3</td><td>49.7</td></tr><tr><td rowspan="3">Cycle loss</td><td>12</td><td>49.7</td><td>8.7</td><td>57.9</td><td>15.2</td><td>10.7</td><td>12.7</td><td>18.2</td><td>19.0</td></tr><tr><td>24</td><td>47.0</td><td>8.7</td><td>54.5</td><td>15.2</td><td>14.2</td><td>18.9</td><td>19.4</td><td>23.1</td></tr><tr><td>48</td><td>42.4</td><td>8.0</td><td>49.4</td><td>14.5</td><td>16.2</td><td>20.0</td><td>22.4</td><td>24.4</td></tr><tr><td>β-VAE</td><td>-</td><td colspan="2">18.1</td><td colspan="2">20.8</td><td colspan="2">12.2</td><td colspan="2">19.0</td></tr><tr><td>EC2-VAE (c)</td><td>-</td><td>83.2</td><td>8.0</td><td>86.2</td><td>14.2</td><td>10.7</td><td>60.0</td><td>16.9</td><td>62.8</td></tr><tr><td>EC2-VAE (c &amp; s)</td><td>-</td><td>90.4</td><td>7.9</td><td>90.4</td><td>14.2</td><td>11.1</td><td>90.5</td><td>18.0</td><td>90.4</td></tr></table>
160
+
161
+ Sprites with Actions Dataset (Sprites) (ope; Yingzhen and Mandt, 2018): The Sprites dataset contains animated cartoon characters with random appearances. We use a modified version of the dataset consisting of video sequences of characters performing 9 different actions in random order. Models should learn the actions as content and appearances as style. Note that the styles can be seen as coming from a continuous space.
162
+
163
+ Librispeech Clean 100 Hours (Libri100) (Panayotov et al., 2015): Librispeech is a large-scale multi-speaker corpus of read English speech in various accents. We use the "clean" pool of the Librispeech dataset, where we select the 100-hour subset for training. We align the audio to 39 phonemes (24 consonants and 15 vowels) using ground truth transcriptions with the Montreal Forced Aligner (McAuliffe et al., 2017), then extract 80-dimensional log-mel spectrograms and resize them to a length of 64 frames. Models should learn phonemes as content and speakers' voices as style. The styles are considered to come from a continuous space, as the speakers' voices are very diverse.
164
+
165
+ # 4.2 RESULTS OF CONTENT-STYLE DISENTANGLEMENT
166
+
167
+ On PhoneNums and InsNotes where concrete style labels are available, we evaluate the models' content-style disentanglement ability by conducting a retrieval experiment to examine the nearest neighbors of every input $z^{\mathrm{c}}$ and $z^{\mathrm{s}}$ using ground truth content and style labels, evaluated by the area under the precision-recall curve (PR-AUC) and the best F1 score. We experiment with different codebook sizes $K$ to allow different levels of vocabulary redundancy. The results are shown in Table 1 and Table 2. We see that V3 outperforms unsupervised baselines on both datasets, and the performance is consistent across different codebook sizes $K$ . V3 also outperforms EC $^2$ -VAE (c) in the style retrieval task, which indicates that V3 learns better-disentangled style representations containing less content information. Visualizations of content and style latent representations learned by V3 also show clearer grouping compared to baselines, for which we refer readers to Appendix C.1.
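+
+ For reference, one plausible way to implement this pairwise retrieval evaluation is sketched below (all-pairs scoring with scikit-learn); the exact protocol used in the paper may differ, and the all-pairs distance matrix is only practical for small evaluation sets.
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import precision_recall_curve, auc
+
+ def retrieval_metrics(embeddings, labels):
+     """PR-AUC and best F1 for retrieval: same-label pairs are positives."""
+     emb = np.asarray(embeddings, dtype=np.float64)                  # (M, D)
+     lab = np.asarray(labels)
+     dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
+     same = (lab[:, None] == lab[None, :]).astype(int)
+     iu = np.triu_indices(len(lab), k=1)                             # unique pairs, no self-matches
+     precision, recall, _ = precision_recall_curve(same[iu], -dists[iu])
+     pr_auc = auc(recall, precision)
+     best_f1 = np.max(2 * precision * recall / np.clip(precision + recall, 1e-12, None))
+     return pr_auc, best_f1
+ ```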
168
+
169
+ Table 3: Linear probing accuracies (in %) for content (digit) classification on SVHN.
170
+
171
+ <table><tr><td>Method</td><td>K</td><td>z^c ↑</td><td>z^s ↓</td></tr><tr><td>V3</td><td>20</td><td>40.6</td><td>18.5</td></tr><tr><td>MINE-based</td><td>20</td><td>36.0</td><td>20.8</td></tr><tr><td>Cycle loss</td><td>20</td><td>16.8</td><td>21.2</td></tr><tr><td>β-VAE</td><td>-</td><td colspan="2">21.8</td></tr><tr><td>Raw input</td><td>-</td><td colspan="2">21.4</td></tr><tr><td>EC²-VAE (c)</td><td>-</td><td>97.0</td><td>21.2</td></tr></table>
172
+
173
+ Table 4: Linear probing accuracies (in %) for content (action) classification on Sprites.
174
+
175
+ <table><tr><td>Method</td><td>K</td><td>z^c ↑</td><td>z^s ↓</td></tr><tr><td>V3</td><td>18</td><td>88.2</td><td>20.2</td></tr><tr><td>MINE-based</td><td>18</td><td>79.1</td><td>22.2</td></tr><tr><td>Cycle loss</td><td>18</td><td>86.4</td><td>39.7</td></tr><tr><td>β-VAE</td><td>-</td><td colspan="2">33.2</td></tr><tr><td>Raw input</td><td>-</td><td colspan="2">99.0</td></tr><tr><td>EC²-VAE (c)</td><td>-</td><td>99.8</td><td>15.7</td></tr></table>
176
+
177
+ Table 5: Linear probing accuracies (in %) for content (phoneme) classification on Libri100.
178
+
179
+ <table><tr><td>Method</td><td>K</td><td>z^c ↑</td><td>z^s ↓</td></tr><tr><td>V3</td><td>80</td><td>52.1</td><td>40.4</td></tr><tr><td>MINE-based</td><td>80</td><td>28.6</td><td>51.6</td></tr><tr><td>Cycle loss</td><td>80</td><td>16.1</td><td>50.5</td></tr><tr><td>β-VAE</td><td>-</td><td colspan="2">11.0</td></tr><tr><td>Raw input</td><td>-</td><td colspan="2">31.8</td></tr><tr><td>EC²-VAE (c)</td><td>-</td><td>78.1</td><td>18.2</td></tr></table>
180
+
181
+ Table 6: Speaker verification equal error rates (in %) with average embedding on Libri100.
182
+
183
+ <table><tr><td>Method</td><td>K</td><td>z^c ↑</td><td>z^s ↓</td></tr><tr><td>V3</td><td>80</td><td>49.5</td><td>42.5</td></tr><tr><td>MINE-based</td><td>80</td><td>49.9</td><td>45.2</td></tr><tr><td>Cycle loss</td><td>80</td><td>49.8</td><td>45.9</td></tr><tr><td>β-VAE</td><td>-</td><td colspan="2">50.0</td></tr><tr><td>Raw input</td><td>-</td><td colspan="2">47.1</td></tr><tr><td>EC²-VAE (c)</td><td>-</td><td>49.2</td><td>49.6</td></tr></table>
184
+
185
+ On SVHN and Sprites where there are no style labels, we evaluate the models' disentanglement ability by linear probing on the learned representations to predict content labels. We also compare with a linear classifier on raw input features. The classifier layer is trained for one epoch before being evaluated on the test set. On both datasets we allow a $100\%$ content vocabulary redundancy, resulting in $K = 20$ for SVHN and $K = 18$ for Sprites. The resulting accuracies are shown in Table 3 and Table 4. V3 outperforms unsupervised baselines on both datasets, only trailing behind the weakly supervised $\mathrm{EC^2}$-VAE (c) as the latter's $z^{\mathrm{c}}$ space is optimized for the discriminative task.
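+
+ A simplified stand-in for this probing protocol is shown below, using a logistic-regression probe on frozen embeddings; the paper instead trains a single linear layer for one epoch, so the resulting numbers would not be identical.
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+
+ def linear_probe_accuracy(train_z, train_y, test_z, test_y):
+     """Fit a linear classifier on frozen embeddings and report test accuracy."""
+     clf = LogisticRegression(max_iter=1000)
+     clf.fit(np.asarray(train_z), np.asarray(train_y))
+     return clf.score(np.asarray(test_z), np.asarray(test_y))
+ ```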
186
+
187
+ On Libri100, we evaluate the disentanglement ability by linear probing on the learned representations to predict content labels, as well as conducting a vanilla speaker verification experiment using the average embeddings of fragments in every utterance. We also allow a content vocabulary redundancy of about $100\%$ ($K = 80$). The results are shown in Table 5 and Table 6. The content and style embeddings learned by V3 show better performance on their respective tasks and lower performance on the other task, indicating that V3 learns better-disentangled representations.
188
+
189
+ # 4.3 CONTENT CLASSIFICATION ON OUT-OF-DISTRIBUTION STYLES
190
+
191
+ We further evaluate the generalization ability of V3 on PhoneNums and InsNotes by testing the models' content classification performance on a special test set with only unseen styles, provided with few-shot examples. We focus on comparing V3 with the weakly supervised method EC$^2$-VAE (c) and a pure CNN classifier to evaluate the generalization ability introduced by latent disentanglement. In the $n$-shot settings, models are presented with $n$ samples of each combination of content and new style. All models are continuously trained on new samples until performance stops improving. For V3, we choose the V3 versions with no codebook redundancy for comparison ($K = 10$ for PhoneNums, $K = 12$ for InsNotes) as they show a one-to-one mapping from codebook entries to content labels (see Section 4.4 and Appendix C.2 for details). We first align the learned codebook entries to ground truth content labels, and obtain classification results from the encoded content representations $z^{\mathrm{c}}$. For EC$^2$-VAE, we try two different continuous training strategies: 1) using pseudo content labels from its own predictions for self-boosting, in addition to optimizing the reconstruction loss, and 2) only optimizing the reconstruction loss. Additionally, we compare with EC$^2$-VAE and the CNN classifier provided with labels in continuous training. The results are shown in Table 7. Although V3 might fall behind supervised methods in the 0-shot setting, it takes the lead in few-shot settings on both datasets as the number of extra samples increases. This indicates that V3 can learn by itself to make sense of unseen styles with only a few examples, an ability that emerges from learning disentangled and interpretable representations.
192
+
193
+ Table 7: Content classification accuracies (in %) on data with OOD styles.
194
+
195
+ <table><tr><td colspan="2">Pretraining</td><td colspan="2">Continuous Training</td><td colspan="4">PhoneNums</td><td colspan="4">InsNotes</td></tr><tr><td>Method</td><td>Supervision</td><td>Supervision</td><td>Self-boost</td><td>0-shot</td><td>1-shot</td><td>5-shot</td><td>10-shot</td><td>0-shot</td><td>1-shot</td><td>5-shot</td><td>10-shot</td></tr><tr><td>V3</td><td>No</td><td>No</td><td>No</td><td>57.8</td><td>91.3</td><td>97.1</td><td>99.0</td><td>90.5</td><td>97.6</td><td>97.8</td><td>99.2</td></tr><tr><td>EC²-VAE (c)</td><td>Yes</td><td>No</td><td>No</td><td>84.2</td><td>92.1</td><td>92.2</td><td>92.7</td><td>87.1</td><td>87.2</td><td>89.4</td><td>91.2</td></tr><tr><td>EC²-VAE (c)</td><td>Yes</td><td>No</td><td>Yes</td><td>84.2</td><td>91.8</td><td>92.1</td><td>92.4</td><td>87.1</td><td>94.6</td><td>95.0</td><td>95.1</td></tr><tr><td>CNN Classifier</td><td>Yes</td><td>No</td><td>No</td><td>59.5</td><td>59.5</td><td>59.5</td><td>59.5</td><td>92.6</td><td>92.6</td><td>92.6</td><td>92.6</td></tr><tr><td>CNN Classifier</td><td>Yes</td><td>No</td><td>Yes</td><td>59.5</td><td>80.2</td><td>82.2</td><td>82.7</td><td>92.6</td><td>87.6</td><td>85.9</td><td>85.3</td></tr><tr><td>EC²-VAE (c)</td><td>Yes</td><td>Yes</td><td>No</td><td>84.2</td><td>94.6</td><td>98.8</td><td>99.2</td><td>87.1</td><td>97.7</td><td>98.9</td><td>99.8</td></tr><tr><td>CNN Classifier</td><td>Yes</td><td>Yes</td><td>No</td><td>59.5</td><td>81.2</td><td>82.4</td><td>83.5</td><td>92.6</td><td>91.9</td><td>91.3</td><td>89.1</td></tr></table>
196
+
197
+ Table 8: Quantitative results of codebook interpretability on datasets with discrete style labels. Values are reported in percentage.
198
+
199
+ <table><tr><td rowspan="2">Method</td><td colspan="3">PhoneNums</td><td colspan="3">InsNotes</td></tr><tr><td>K</td><td>Acc. ↑</td><td>σ ↓</td><td>K</td><td>Acc. ↑</td><td>σ ↓</td></tr><tr><td rowspan="3">V3</td><td>10</td><td>89.2</td><td>0.6</td><td>12</td><td>99.8</td><td>0.1</td></tr><tr><td>20</td><td>99.7</td><td>1.8</td><td>24</td><td>92.9</td><td>2.2</td></tr><tr><td>40</td><td>99.9</td><td>4.5</td><td>48</td><td>90.2</td><td>4.5</td></tr><tr><td rowspan="3">MINE-based</td><td>10</td><td>40.9</td><td>8.1</td><td>12</td><td>13.8</td><td>3.8</td></tr><tr><td>20</td><td>25.6</td><td>9.8</td><td>24</td><td>29.4</td><td>8.9</td></tr><tr><td>40</td><td>50.6</td><td>6.2</td><td>48</td><td>26.9</td><td>3.6</td></tr><tr><td rowspan="3">Cycle loss</td><td>10</td><td>71.0</td><td>3.7</td><td>12</td><td>27.5</td><td>11.4</td></tr><tr><td>20</td><td>89.6</td><td>4.3</td><td>24</td><td>28.5</td><td>11.9</td></tr><tr><td>40</td><td>99.9</td><td>4.1</td><td>48</td><td>18.2</td><td>6.2</td></tr></table>
200
+
201
+ Table 9: Quantitative results of codebook interpretability on datasets without discrete style labels. Values are reported in percentage.
202
+
203
+ <table><tr><td rowspan="2">Method</td><td colspan="2">SVHN</td><td colspan="2">Sprites</td><td colspan="2">Libri100</td></tr><tr><td>K</td><td>Acc. ↑</td><td>K</td><td>Acc. ↑</td><td>K</td><td>Acc. ↑</td></tr><tr><td>V3</td><td>20</td><td>47.6</td><td>18</td><td>98.5</td><td>80</td><td>24.7</td></tr><tr><td>MINE-based</td><td>20</td><td>26.0</td><td>18</td><td>38.3</td><td>80</td><td>10.8</td></tr><tr><td>Cycle loss</td><td>20</td><td>20.1</td><td>18</td><td>82.0</td><td>80</td><td>10.5</td></tr></table>
204
+
205
+ # 4.4 RESULTS OF SYMBOLIC CONTENT INTERPRETABILITY
206
+
207
+ We notice that interpretable symbols emerge in the learned codebook of V3, showing its ability to abstract concepts from raw observations. To evaluate the interpretability of learned content representations quantitatively, we propose two metrics: the learned content codebook accuracy and the standard deviation among styles. We first align each codebook entry to a ground truth content label according to its distribution over content labels, and then calculate the accuracy of the codebook entries with respect to their aligned labels. A well-learned interpretable codebook should have entries concentrated on the content labels they are aligned with, thus showing high accuracy. Also, since a good symbol is a symbol of consensus, on datasets with discrete style labels we additionally quantify the discrepancy across styles in the distribution of codebook entries over content labels, using the standard deviation $(\sigma)$ of the confusion matrices between codebook entries and content labels, as shown in Table 8. For datasets with no discrete style labels, we report the accuracy of codebook entries in Table 9. Visualizations of confusion matrices between the codebook and ground truth content labels can be found in Appendix C.2.
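+
+ A minimal sketch of the alignment and accuracy computation is given below; it uses a simple many-to-one argmax alignment (each codebook entry is mapped to its most frequent content label), which is one reasonable reading of the procedure described above rather than the authors' exact rule.
+
+ ```python
+ import numpy as np
+
+ def codebook_accuracy(code_idx, content_labels, K, C):
+     """Align each of K codebook entries to one of C content labels and score it."""
+     counts = np.zeros((K, C), dtype=np.int64)
+     for k, c in zip(code_idx, content_labels):        # one (entry, label) pair per fragment
+         counts[k, c] += 1
+     aligned = counts.argmax(axis=1)                   # label assigned to each entry
+     correct = counts[np.arange(K), aligned].sum()     # fragments matching their entry's label
+     return correct / counts.sum()
+ ```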
208
+
209
+ From Table 8 and Table 9, we observe that V3 achieves good codebook interpretability by representing content labels with consistent codebook entries, and this consistency is maintained well across styles. Table 8 also shows that V3 remains interpretable with or without vocabulary redundancy, indicating that V3 does not rely on knowledge of the number of content classes to learn interpretable symbols.
210
+
211
+ Qualitatively, we perform content and style recombination by traversing all content codebook entries and decoding them with a fixed style representation. If the learned codebook has good interpretability, the decoding results of recombined content and style representations should show meaningful content changes, and retain consistent styles. Here, we focus on comparing the recombination results of V3 and baselines on SVHN, where we first encode $z^{\mathrm{s}}$ from example fragments, and then recombine it with all $K$ content codebook entries. More experiment results on PhoneNums and InsNotes can be found in Appendix C.2 and our demo website.
212
+
213
+ From Figure 3, we observe that V3 generates images with clear content changes and consistent styles when recombining content and style representations. Although images generated by V3 may not cover all possible contents, as many of them never appear in the training set in the given styles, the contents are almost all recognizable digits, and V3 can even reasonably "imagine" what a digit would look like in a new font and color. In contrast, the baselines generate images with either mixed content and style information, or very subtle changes in content that are hard to interpret. This comparison
214
+
215
+ ![](images/2411ed4aac40d2c0c4c372d4fe7a07cb6eaef6643a55a178fbb797734e909ab8.jpg)
216
+ Figure 3: Comparison of generated images by recombining $z^{\mathrm{s}}$ from given sources in SVHN and all $z^{\mathrm{c}}$ in the learned codebook.
217
+
218
+ not only validates the interpretability of V3's learned codebook resulting from successful content-style disentanglement, but also demonstrates the potential of V3 in style transfer and content editing tasks.
219
+
220
+ # 5 LIMITATION
221
+
222
+ We have identified several limitations in our V3 method that necessitate further investigation. First, while V3 achieves good disentanglement and symbolic interpretability, it is not flawless — samples of different contents (say images of “8” and “9”) may be projected into the same latent code. Inspired by human learning, which effectively integrates both mode-1 and mode-2 cognitive processes, we aim to enhance V3 by incorporating certain feedback or reinforcement. This adaptation could also facilitate the application of V3 to more complex domains such as general image or video. Also, V3 focuses on learning the atomic content symbols that compose meanings of data samples, but the current approach does not learn the meanings of individual symbols or those emerging from specific symbol permutations. Additionally, V3 is currently optimized to disentangle content and style from data samples that include defined fragments. Extending this capability to unsegmented data of large vocabularies, such as continuous audio, represents a significant area for future development. Furthermore, V3 assumes that content elements do not overlap, which does not hold in cases of polyphonic music or mixed audio. Addressing this challenge will require a more sophisticated approach that considers the hierarchical nature of content.
223
+
224
+ # 6 CONCLUSION
225
+
226
+ In conclusion, we contributed an unsupervised content-style disentanglement method named V3. V3's inductive bias is domain-general, intuitive, and concise, solely based on the meta-level insight of the statistical difference between content and style, i.e., their distinct variance-invariance patterns reflected both within and across data samples. Experiment results showed that V3 not only outperforms the baselines in terms of content-style disentanglement, but also demonstrates superior generalizability on OOD styles compared to supervised methods, and achieves high interpretability of learned content symbols. The effectiveness of V3 generalizes across different domains, including audio, image, and video. We believe that V3 has the potential to be applied to emergent knowledge in general, and we plan to extend our method to more complex tasks and domains in the future.
227
+
228
+ # REFERENCES
229
+
230
+ About the Liberated Pixel Cup — lpc.opengameart.org. http://lpc.opengameart.org/. [Accessed 26-09-2024].
231
+ Junwen Bai, Weiran Wang, and Carla P Gomes. Contrastively disentangled sequential variational autoencoder. Advances in Neural Information Processing Systems, 34:10105-10118, 2021.
232
+ Adrien Bardes, Jean Ponce, and Yann LeCun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. In 10th International Conference on Learning Representations, ICLR 2022, 2022.
233
+
234
+ Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In International conference on machine learning, pages 531-540. PMLR, 2018.
235
+ Russell Sammut Bonnici, Martin Benning, and Charalampos Saitis. Timbre transfer with variational auto encoding and cycle-consistent adversarial networks. In 2022 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2022.
236
+ Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3722-3731, 2017.
237
+ Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-gan: Interpretable representation learning by information maximizing generative adversarial nets. Advances in neural information processing systems, 29, 2016.
238
+ Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8789-8797, 2018.
239
+ Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8188-8197, 2020.
240
+ Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre Defossez. Simple and controllable music generation. Advances in Neural Information Processing Systems, 36, 2024.
241
+ Hugo Flores Garcia, Prem Seetharaman, Rithesh Kumar, and Bryan Pardo. Vampnet: Music generation via masked acoustic token modeling. arXiv preprint arXiv:2307.04686, 2023.
242
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
243
+ Irina Higgins, Loic Matthey, Arka Pal, Christopher P Burgess, Xavier Glorot, Matthew M Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. *ICLR (Poster)*, 3, 2017.
244
+ Ziming Hong, Zhenyi Wang, Li Shen, Yu Yao, Zhuo Huang, Shiming Chen, Chuanwu Yang, Mingming Gong, and Tongliang Liu. Improving non-transferable representation learning by harnessing content and style. In The Twelfth International Conference on Learning Representations, 2023.
245
+ Wei-Ning Hsu, Yu Zhang, and James Glass. Unsupervised learning of disentangled and interpretable representations from sequential data. Advances in neural information processing systems, 30, 2017.
246
+ Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125-1134, 2017.
247
+ Xu Ji, Joao F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9865-9874, 2019.
248
+ Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, and Nobukatsu Hojo. Stargan-vc: Non-parallel many-to-many voice conversion using star generative adversarial networks. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 266-273. IEEE, 2018.
249
+ Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, and Nobukatsu Hojo. Stargan-vc2: Rethinking conditional methods for stargan-based voice conversion. arXiv preprint arXiv:1907.12279, 2019.
250
+
251
+ Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401-4410, 2019.
252
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
253
+ Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
254
+ Gihyun Kwon and Jong Chul Ye. Clipstyler: Image style transfer with a single text condition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18062-18071, 2022.
255
+ Yann LeCun. A path towards autonomous machine intelligence version 0.9.2, 2022-06-27. Open Review, 62(1), 2022.
256
+ Yinghao Aaron Li, Ali Zare, and Nima Mesgarani. Starganv2-vc: A diverse, unsupervised, non-parallel framework for natural-sounding voice conversion. arXiv preprint arXiv:2107.10394, 2021.
257
+ Liwei Lin, Qiuqiang Kong, Junyan Jiang, and Gus Xia. A unified model for zero-shot music source separation, transcription and synthesis. arXiv preprint arXiv:2108.03456, 2021.
258
+ Liwei Lin, Gus Xia, Junyan Jiang, and Yixiao Zhang. Content-based controls for music large language modeling. arXiv preprint arXiv:2310.17162, 2023.
259
+ Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. Advances in neural information processing systems, 30, 2017.
260
+ Xuanjie Liu, Daniel Chin, Yichen Huang, and Gus Xia. Learning interpretable low-dimensional representation via physical symmetry. arXiv preprint arXiv:2302.10890, 2023.
261
+ Yang Liu, Zhaowen Wang, Hailin Jin, and Ian Wassell. Multi-task adversarial network for disentangled feature learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3743-3751, 2018.
262
+ Chien-Yu Lu, Min-Xin Xue, Chia-Che Chang, Che-Rung Lee, and Li Su. Play as you like: Timbreenhanced multi-modal music style transfer. In Proceedings of the aaai conference on artificial intelligence, volume 33, pages 1061-1068, 2019.
263
+ Yin-Jyun Luo, Sebastian Ewert, and Simon Dixon. Towards robust unsupervised disentanglement of sequential data-a case study using music audio. arXiv preprint arXiv:2205.05871, 2022.
264
+ Yin-Jyun Luo, Sebastian Ewert, and Simon Dixon. Unsupervised pitch-timbre disentanglement of musical instruments using a jacobian disentangled sequential autoencoder. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1036-1040. IEEE, 2024.
265
+ Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. Montreal forced aligner: Trainable text-speech alignment using kaldi. In Interspeech, volume 2017, pages 498-502, 2017.
266
+ Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai. ACM Computing Surveys, 55(13s):1-42, 2023.
267
+ Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Baolin Wu, Andrew Y Ng, et al. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, page 4. Granada, 2011.
268
+ Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206-5210. IEEE, 2015.
269
+
270
+ Taesung Park, Alexei A Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for unpaired image-to-image translation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IX 16, pages 319-345. Springer, 2020.
271
+ Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2085–2094, 2021.
272
+ Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In International Conference on Machine Learning, pages 5171-5180. PMLR, 2019.
273
+ Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, and Mark Hasegawa-Johnson. Autovc: Zero-shot voice style transfer with only autoencoder loss. In International Conference on Machine Learning, pages 5210-5219. PMLR, 2019.
274
+ Kaizhi Qian, Zeyu Jin, Mark Hasegawa-Johnson, and Gautham J Mysore. F0-consistent many-to-many non-parallel voice conversion via conditional autoencoder. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6284-6288. IEEE, 2020a.
275
+ Kaizhi Qian, Yang Zhang, Shiyu Chang, Mark Hasegawa-Johnson, and David Cox. Unsupervised speech decomposition via triple information bottleneck. In International Conference on Machine Learning, pages 7836-7846. PMLR, 2020b.
276
+ Xuanchi Ren, Tao Yang, Yuwang Wang, and Wenjun Zeng. Rethinking content and style: exploring bias for unsupervised disentanglement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1823-1832, 2021.
277
+ Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays. Scribbler: Controlling deep image synthesis with sketch and color. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5400-5409, 2017.
278
+ Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.
279
+ Hao Tan, Jie Lei, Thomas Wolf, and Mohit Bansal. Vimpac: Video pre-training via masked token prediction and contrastive learning. arXiv preprint arXiv:2106.11250, 2021.
280
+ Andros Tjandra, Ruoming Pang, Yu Zhang, and Shigeki Karita. Unsupervised learning of disentangled speech content and style representation. arXiv preprint arXiv:2010.12973, 2020a.
281
+ Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. Transformer vq-vae for unsupervised unit discovery and speech synthesis: Zerospeech 2020 challenge. arXiv preprint arXiv:2005.11676, 2020b.
282
+ Mohammadhassan Vali and Tom Backström. Interpretable latent space using space-filling curves for phonetic analysis in voice conversion. In Proceedings of Interspeech Conference, 2023.
283
+ Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017.
284
+ Ziyu Wang, Dingsu Wang, Yixiao Zhang, and Gus Xia. Learning interpretable representation for controllable polyphonic music generation. In Proceedings of 21st International Conference on Music Information Retrieval (ISMIR), 2020.
285
+ Ziyu Wang, Dejing Xu, Gus Xia, and Ying Shan. Audio-to-symbolic arrangement via cross-modal music representation learning. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 181-185. IEEE, 2022.
286
+ Richard Wilhelm, Cary F Baynes, and Carl G Jung. I Ching: Book of changes. Grange Books, 2001.
287
+
288
+ Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, and Shlomo Dubnov. Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE, 2023.
289
+ Shaoan Xie, Qirong Ho, and Kun Zhang. Unsupervised image-to-image translation with density changing regularization. Advances in Neural Information Processing Systems, 35:28545-28558, 2022a.
290
+ Shaoan Xie, Lingjing Kong, Mingming Gong, and Kun Zhang. Multi-domain image generation and translation with identifiability guarantees. In The Eleventh International Conference on Learning Representations, 2022b.
291
+ Yangchen Xie, Xinyuan Chen, Li Sun, and Yue Lu. Dg-font: Deformable generative networks for unsupervised font generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5130-5140, 2021.
292
+ Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021.
293
+ Ruihan Yang, Dingsu Wang, Ziyu Wang, Tianyao Chen, Junyan Jiang, and Gus Xia. Deep music analogy via latent representation disentanglement. In Proceedings of 20th International Conference on Music Information Retrieval (ISMIR), 2019.
294
+ Dacheng Yin, Xuanchi Ren, Chong Luo, Yuwang Wang, Zhiwei Xiong, and Wenjun Zeng. Retriever: Learning content-style representation as a token-level bipartite graph. arXiv preprint arXiv:2202.12307, 2022.
295
+ Li Yingzhen and Stephan Mandt. Disentangled sequential autoencoder. In International Conference on Machine Learning, pages 5670-5679. PMLR, 2018.
296
+ Huan Zhang and Simon Dixon. Disentangling the horowitz factor: Learning content and style from expressive piano performance. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE, 2023.
297
+ Yixiao Zhang, Yukara Ikemiya, Gus Xia, Naoki Murata, Marco Martínez, Wei-Hsiang Liao, Yuki Mitsufuji, and Simon Dixon. Musicmagus: Zero-shot text-to-music editing via diffusion models. arXiv preprint arXiv:2402.06178, 2024.
298
+ Yu Zhang, Peter Tino, Aleš Leonardis, and Ke Tang. A survey on neural network interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence, 5(5):726-742, 2021.
299
+ Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223-2232, 2017.
300
+
301
+ # APPENDICES
302
+
303
+ The appendix is structured into 5 main parts. Appendix A provides specifics about the datasets involved in the paper. Appendix B presents implementation and training details of V3 and baseline methods. Appendix C provides additional experiment results and especially visualizations for better understanding. Appendix D presents an ablation study on the V3 model. Finally, we provide an analysis on learning content and style in Appendix E.
304
+
305
+ # A DATASET DETAILS
306
+
307
+ # A.1 PHONENUMS
308
+
309
+ The written phone numbers dataset is designed to present a clear content and style separation to humans. We use the Kristen ITC font for the style because its digits look similar to handwritten digits and are easy to distinguish. We render the digits from 0 to 9 on a light background of RGB (10, 10, 10) using the foreground colors listed in Table 10. For more randomness, we first jitter the foreground
310
+
311
+ Table 10: List of colors and their corresponding RGB values. Colors only used for out-of-distribution experiments in Section 4.3 are marked with *
312
+
313
+ <table><tr><td>RGB Values #</td><td>Color</td></tr><tr><td>(8, 8, 8)</td><td>Black</td></tr><tr><td>(8, 8, 248)</td><td>Blue</td></tr><tr><td>(8, 128, 8)</td><td>Green</td></tr><tr><td>(248, 8, 8)</td><td>Red</td></tr><tr><td>(8, 128, 128)</td><td>Teal</td></tr><tr><td>(128, 8, 128)</td><td>Purple</td></tr><tr><td>(248, 163, 8)</td><td>Orange</td></tr><tr><td>(163, 47, 47)</td><td>Brown</td></tr><tr><td>(248, 188, 199)</td><td>Pink *</td></tr><tr><td>(243, 128, 115)</td><td>Salmon *</td></tr><tr><td>(248, 210, 8)</td><td>Gold *</td></tr><tr><td>(8, 248, 8)</td><td>Lime *</td></tr><tr><td>(8, 248, 248)</td><td>Cyan *</td></tr><tr><td>(248, 8, 248)</td><td>Magenta *</td></tr><tr><td>(128, 128, 128)</td><td>Gray *</td></tr><tr><td>(200, 133, 67)</td><td>Peru *</td></tr></table>
314
+
315
+ and background colors with noise from -2 to 2 along every channel, then add a small amount of Gaussian noise. We then translate all digits vertically or horizontally by a random number of pixels between -2 and 2. Lastly, we add a random Gaussian blur effect. Our dataset contains 100000 images in total, each of which has 10 digits. The dataset is split into the train set, validation set, and test set with a ratio of 8:1:1. Some samples of the dataset can be viewed in Figure 4.
316
+
317
318
+
319
+ Figure 4: Left: example training data in PhoneNums. Right: example data for out-of-distribution evaluation in PhoneNums.
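+
+ The augmentation pipeline described above can be approximated as follows. Note that the dataset jitters the foreground and background colors before rendering, whereas this sketch, for brevity, perturbs an already rendered image, and the blur strength range is an assumption.
+
+ ```python
+ import numpy as np
+ from scipy.ndimage import gaussian_filter, shift
+
+ def augment(img, rng):
+     """img: float array (H, W, 3) in [0, 255]; returns a jittered, shifted, blurred copy."""
+     out = img + rng.integers(-2, 3, size=3)                    # per-channel color jitter in [-2, 2]
+     out = out + rng.normal(0.0, 1.0, size=out.shape)           # small Gaussian noise
+     dy, dx = rng.integers(-2, 3, size=2)                       # translation of up to 2 pixels
+     out = shift(out, (dy, dx, 0), mode="nearest")
+     sigma = rng.uniform(0.0, 1.0)                              # random Gaussian blur (assumed range)
+     out = gaussian_filter(out, sigma=(sigma, sigma, 0.0))      # blur spatial axes only
+     return np.clip(out, 0, 255)
+
+ # Usage: augmented = augment(image.astype(np.float64), np.random.default_rng(0))
+ ```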
320
+
321
+ # A.2 INSNOTES
322
+
323
+ In InsNotes, we collect monophonic music audio played by different instruments. This dataset is also designed based on the understanding that this domain exhibits content and style concepts that are
324
+
325
+ clear to humans - music pitches and instrument timbres. The dataset consists of monophonic music audio rendered from 12 instruments playing 12 different pitches in an octave, which corresponds to MIDI numbers from 60 to 71. All the instruments selected have little exponential decay, so their timbres can be well represented with short audio samples. In the out-of-distribution generalization experiment in Section 4.3, we select four unseen instruments: two with slight exponential decay and two with strong attacks and distinct exponential decay, making the task particularly challenging. We list the instruments involved as well as the specific MIDI programs selected in Table 11.
326
+
327
+ Table 11: List of instruments and their corresponding MIDI program numbers. Instruments used only for out-of-distribution generalization are marked with *.
328
+
329
+ <table><tr><td>Program #</td><td>Instrument</td></tr><tr><td>19</td><td>Pipe organ</td></tr><tr><td>21</td><td>Accordion</td></tr><tr><td>22</td><td>Harmonica</td></tr><tr><td>41</td><td>Viola</td></tr><tr><td>52</td><td>Choir aahs</td></tr><tr><td>56</td><td>Trumpet</td></tr><tr><td>59</td><td>Muted trumpet</td></tr><tr><td>64</td><td>Soprano sax</td></tr><tr><td>68</td><td>Oboe</td></tr><tr><td>71</td><td>Clarinet</td></tr><tr><td>72</td><td>Piccolo</td></tr><tr><td>75</td><td>Pan flute</td></tr><tr><td>0</td><td>Grand piano *</td></tr><tr><td>4</td><td>Tine electric piano *</td></tr><tr><td>73</td><td>Flute *</td></tr><tr><td>78</td><td>Irish flute *</td></tr></table>
330
+
331
+ For every instrument, we play the pitches one by one, each for one second with a random velocity between 80 and 120, until every pitch has been played 10 times. We synthesize 100 such takes at $16\mathrm{kHz}$ for each instrument using a soundfont library, and further diversify the data by adding a random amplitude envelope to each note. The added amplitude envelope is either a linear curve or a sinusoidal curve, starting and ending at a random amplitude factor between 0.8 and 1.2. The audio files are then normalized and processed with the short-time Fourier transform (STFT) with an FFT size of 1024 and a hop size of 512 to obtain magnitude spectrograms, which results in a $512 \times 32$ matrix for each note. To avoid possible overlap between adjacent notes, we add a 0.056-second pause in between, resulting in one transition frame in the spectrogram. Our dataset contains 1200 audio files in total, each of which has 120 notes. The dataset is split into train, validation, and test sets with a ratio of 8:1:1.
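+ The spectrogram step can be sketched as below, assuming librosa for audio loading and the STFT. How exactly the 513 STFT bins are reduced to 512 and how the per-note 32-frame segments are cut (including the transition frame) are assumptions of this sketch, not a description of the released code.

```python
import numpy as np
import librosa

def note_spectrograms(path: str, n_fft: int = 1024, hop: int = 512,
                      frames_per_note: int = 32) -> np.ndarray:
    """Return per-note magnitude spectrograms of shape (n_notes, 512, 32)."""
    y, sr = librosa.load(path, sr=16000)
    mag = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))   # (513, T)
    mag = mag[:-1]                                               # keep 512 bins (assumption)

    # Chop the spectrogram into consecutive 32-frame notes, dropping any remainder.
    n_notes = mag.shape[1] // frames_per_note
    notes = mag[:, :n_notes * frames_per_note]
    return notes.reshape(512, n_notes, frames_per_note).transpose(1, 0, 2)
```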
332
+
333
+ # A.3 SVHN
334
+
335
+ The Street View House Numbers (SVHN) dataset is a real-world dataset that contains images of house numbers collected from Google Street View, together with digit-level bounding boxes (Netzer et al., 2011). Examples of the dataset can be viewed in Figure 5. The dataset originally consists of 73257 digits for training, 26032 digits for testing, and 531131 additional, somewhat less difficult samples as the extra partition. We split the extra partition into additional training, validation and testing sets with a ratio of 8:1:1. For the content-style disentanglement task, we select all images with at least two labeled digits, and resize the bounding boxes to $32 \times 48$ pixels. The dataset is preprocessed by normalizing the pixel values to the range [0, 1]. Compared to PhoneNums, although both are image datasets with digits as content, SVHN is significantly more challenging for the following reasons: 1) The digits in SVHN can be very blurry compared to those in PhoneNums; 2) The digits in SVHN come with much more flexible styles in a fully continuous space, involving different fonts, thicknesses, inclinations, colors, and so on; 3) In every SVHN image, the style variation among digits can be more significant than in PhoneNums, as there can be environmental factors like shadows; 4) The bounding boxes are not always tight, clean and complete, and the digits are not always centered in the image; 5) The classes are very imbalanced: almost all images contain a 0, 1 or 2, but very few of them have an 8 or 9;
336
+
337
+ ![](images/7bce70e909a2aa331f1e487ab95cf6416dd9faf6aedf94ce2552caa8932f098f.jpg)
338
+ Figure 5: Example data in the original SVHN dataset. The digits in the images are bounded by the red boxes.
339
+
340
+ 6) Most importantly, house numbers are generally very short strings. Among images with at least two digits, $57.6\%$ have exactly two digits, which means that for most styles there is no full coverage of all digits, and during training V3 only has a highly incomplete view of the full content vocabulary. We choose SVHN to demonstrate the robustness of V3 in learning content-style disentanglement in a much more challenging setting.
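+ The cropping and normalization step can be sketched as follows; the bounding-box dictionary keys mirror the common SVHN digitStruct conventions, and which side of the $32 \times 48$ crop is the width is an assumption of this sketch.

```python
import numpy as np
from PIL import Image

def crop_digits(image_path, boxes):
    """Crop each labeled digit, resize to 32x48, and scale pixels to [0, 1].

    `boxes` is a list of dicts with keys 'left', 'top', 'width', 'height'.
    Images with fewer than two labeled digits are skipped, as described above.
    """
    if len(boxes) < 2:
        return None
    img = Image.open(image_path).convert("RGB")
    crops = []
    for b in boxes:
        box = (b["left"], b["top"], b["left"] + b["width"], b["top"] + b["height"])
        crop = img.crop(box).resize((32, 48))        # PIL size is (width, height)
        crops.append(np.asarray(crop, dtype=np.float32) / 255.0)
    return np.stack(crops)                           # (num_digits, 48, 32, 3)
```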
341
+
342
+ # A.4 SPRITES
343
+
344
+ The original Sprites dataset, collected from an open-source project and adopted by Yingzhen and Mandt (2018), contains animated cartoon characters in the pixel graphic style with random appearances and actions. The original Sprites dataset contains animations of six different actions in four perspectives. We collect the Sprites with Actions dataset used in this study by selecting 3 distinct actions in 3 perspectives, resulting in 9 different actions in total, and rendering videos of characters performing actions randomly drawn from these 9 categories, using the critical frames from each action animation. The dataset contains 2160 videos in total, each of which has 9 frames. The characters differ in their hair, body, top and bottom, forming 2160 unique characters in total. We use $80\%$ of the characters for training and the rest for validation and testing. Examples of the dataset can be viewed on the right of Figure 1.
345
+
346
+ # A.5 LIBRI100
347
+
348
+ The Libri100 dataset is a subset of the LibriSpeech dataset (Panayotov et al., 2015), containing the "train-clean-100", "dev-clean", and "test-clean" subsets. There are 331 different speakers in total, of which 165 are female and 166 are male. There are no overlapping speakers between the train, validation, and test divisions. Given the audio files and ground truth transcriptions, we align the audio with the 39 phonemes used in English using the Montreal Forced Aligner (McAuliffe et al., 2017). After normalizing the cropped fragments, we extract mel spectrograms with a window size of 16 ms, a hop size of 5 ms, and 80 mel bands. The 39 phonemes are indexed as shown in Table 12.
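+ A minimal sketch of the feature extraction, assuming 16 kHz audio (LibriSpeech's native rate) and librosa; the forced alignment itself is produced by the Montreal Forced Aligner and is not reproduced here, and the exact FFT size and log scaling are assumptions.

```python
import numpy as np
import librosa

def mel_features(path: str) -> np.ndarray:
    """80-band log-mel spectrogram with a 16 ms window and 5 ms hop at 16 kHz."""
    y, sr = librosa.load(path, sr=16000)
    y = y / (np.max(np.abs(y)) + 1e-8)      # peak normalization (assumed)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr,
        n_fft=512,                          # >= win_length; exact value assumed
        win_length=256,                     # 16 ms at 16 kHz
        hop_length=80,                      # 5 ms at 16 kHz
        n_mels=80,
    )
    return librosa.power_to_db(mel)         # (80, T)
```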
349
+
350
+ # B IMPLEMENTATION DETAILS
351
+
352
+ # B.1 MODEL ARCHITECTURE
353
+
354
+ On InsNotes, we instantiate the V3 model using a ResNet18 encoder and a ResNet18T decoder with bottlenecks (He et al., 2016). In the encoder, the number of channels in the first convolutional layer is set to 64 and gradually increases to 512 in the last layer. The first half of the encoder uses a kernel size of 9 and the second half uses a kernel size of 5. The decoder is symmetric to the encoder. The latent dimension is set to 512. The total number of trainable parameters is 55M.
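+ As a rough schematic (not the exact implementation, which uses ResNet18-style bottleneck blocks), the InsNotes encoder can be pictured as a stack of strided residual stages whose widths grow from 64 to 512 with kernel sizes 9 and then 5, followed by a projection to the 512-dimensional latent; the decoder mirrors it with transposed convolutions and is omitted here.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Plain residual block; the bottleneck variant used in the paper is omitted for brevity."""
    def __init__(self, c_in, c_out, kernel, stride=2):
        super().__init__()
        pad = kernel // 2
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel, stride, pad), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, kernel, 1, pad), nn.BatchNorm2d(c_out),
        )
        self.skip = nn.Conv2d(c_in, c_out, 1, stride)

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class Encoder(nn.Module):
    """Widths 64 -> 512, kernel size 9 in the first half and 5 in the second half."""
    def __init__(self, in_ch=1, latent=512):   # single-channel spectrogram input assumed
        super().__init__()
        self.stages = nn.Sequential(
            ResBlock(in_ch, 64, 9), ResBlock(64, 128, 9),
            ResBlock(128, 256, 5), ResBlock(256, 512, 5),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(512, latent)

    def forward(self, x):
        return self.proj(self.pool(self.stages(x)).flatten(1))
```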
355
+
356
+ On PhoneNums and Sprites, we instantiate the V3 model using a ResNet encoder and a ResNetT decoder that are half as deep as those used for the pitch and timbre learning task. Similarly, the number of channels in the first convolutional layer is set to 16 and gradually increases to 256 in the last layer. The encoder uses a
357
+
358
+ Table 12: List of phonemes with their indices.
359
+
360
+ <table><tr><td>Index</td><td>Phoneme</td><td>Index</td><td>Phoneme</td><td>Index</td><td>Phoneme</td></tr><tr><td>0</td><td>eh</td><td>13</td><td>ao</td><td>26</td><td>l</td></tr><tr><td>1</td><td>z</td><td>14</td><td>ey</td><td>27</td><td>k</td></tr><tr><td>2</td><td>s</td><td>15</td><td>hh</td><td>28</td><td>m</td></tr><tr><td>3</td><td>uw</td><td>16</td><td>y</td><td>29</td><td>ch</td></tr><tr><td>4</td><td>aw</td><td>17</td><td>f</td><td>30</td><td>ng</td></tr><tr><td>5</td><td>oy</td><td>18</td><td>r</td><td>31</td><td>t</td></tr><tr><td>6</td><td>dx</td><td>19</td><td>g</td><td>32</td><td>w</td></tr><tr><td>7</td><td>dh</td><td>20</td><td>v</td><td>33</td><td>ae</td></tr><tr><td>8</td><td>uh</td><td>21</td><td>ah</td><td>34</td><td>iy</td></tr><tr><td>9</td><td>aa</td><td>22</td><td>er</td><td>35</td><td>th</td></tr><tr><td>10</td><td>d</td><td>23</td><td>ow</td><td>36</td><td>ay</td></tr><tr><td>11</td><td>p</td><td>24</td><td>sh</td><td>37</td><td>ih</td></tr><tr><td>12</td><td>n</td><td>25</td><td>b</td><td>38</td><td>jh</td></tr></table>
361
+
362
+ kernel size of 5. The decoder is symmetric to the encoder. The latent dimension is set to 512. The total number of trainable parameters is 20M on PhoneNums and 25M on Sprites.
363
+
364
+ On SVHN, we add one more ResNet layer in every ResBlock on top of the ResNet encoder used in PhoneNums and Sprites. The number of channels in the first convolutional layer is set to 32 and gradually increases to 512 in the last layer. The first half of the encoder uses a kernel size of 5 and the second half uses a kernel size of 3. The decoder is symmetric to the encoder. The latent dimension is set to 768. The total number of trainable parameters is 37M.
365
+
366
+ On Libri100, we use an architecture similar to the one used on InsNotes, but with a maximum number of channels of 256. We deepen the encoder with 2 more ResNet blocks in each layer. The total number of trainable parameters is 24M.
367
+
368
+ We use the same neural network architecture as V3 for the MINE-based baseline and the cycle loss-based baseline, except that the style branch of the MINE-based method has a variational latent layer. For the MINE-based baseline, we use a 3-layer multi-layer perceptron with 512 hidden units to estimate the mutual information. For the supervised baseline $\mathrm{EC^2}$ -VAE (c), we replace the VQ layer of the content branch with a linear layer projecting to the dimension of the prediction logits. In addition, the encoder outputs of the style branch are mean and log-variance vectors instead of representation vectors, i.e., the style branch is a variational autoencoder (VAE) (Kingma and Welling, 2013). For the fully supervised baseline $\mathrm{EC^2}$ -VAE (c/s), we project the reparameterized style vectors to the dimension of the prediction logits.
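+ For reference, the MINE statistics network and its mutual-information estimate can be sketched as below (a standard Donsker-Varadhan bound; tensor names and dimensions are illustrative, not taken from the baseline code):

```python
import math
import torch
import torch.nn as nn

class MINE(nn.Module):
    """3-layer MLP statistics network T(z_c, z_s) with 512 hidden units."""
    def __init__(self, dim_c: int, dim_s: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_c + dim_s, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def mi_lower_bound(self, z_c: torch.Tensor, z_s: torch.Tensor) -> torch.Tensor:
        # Joint samples are matched (z_c, z_s) pairs; marginals use shuffled z_s.
        joint = self.net(torch.cat([z_c, z_s], dim=-1)).squeeze(-1)
        shuffled = z_s[torch.randperm(z_s.size(0))]
        marginal = self.net(torch.cat([z_c, shuffled], dim=-1)).squeeze(-1)
        # Donsker-Varadhan estimate: E_joint[T] - log E_marginal[exp(T)]
        return joint.mean() - (torch.logsumexp(marginal, dim=0) - math.log(marginal.size(0)))
```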
369
+
370
+ # B.2 TRAINING DETAILS
371
+
372
+ For all models, we use the Adam optimizer with a learning rate of 0.001 (Kingma and Ba, 2014). The fragment sizes on PhoneNums, InsNotes, SVHN and Sprites are set to 10, 12, 2 and 6, respectively. The relativity $r$ is set to 15, 15, 5, 10 and 5 on PhoneNums, InsNotes, SVHN, Sprites and Libri100, respectively. (In general, we recommend setting a higher $r$ , such as 15, on datasets with a clean content-style separation, and a lower $r$ , such as 5, on more complex datasets where reconstruction may need to be emphasized more.) The V3 loss weight $\beta$ is set to 1 by default on InsNotes, and to 0.1 on the other datasets. For all VQ-based models, we update the codebooks using an exponential moving average with a decay rate of 0.95 (Van Den Oord et al., 2017). The commitment loss weight $\alpha$ is set to 0.01. On PhoneNums and InsNotes, we set a threshold of $\frac{n}{10K}$ for dead code relaunching to improve codebook utilization, where $n$ is the total number of fragments in a batch. On SVHN, as most images have only 2 or 3 digits, we concatenate fragments from different samples in practice to obtain higher content coverage and stabilize training. Similarly, on Libri100, as many consonant phonemes do not exhibit distinct styles like vowels do, we also smooth the styles by averaging adjacent fragments in practice. For the MINE-based baseline models, we update the MINE network once every global iteration using the Adam optimizer and adaptive gradient scaling (Tjandra et al., 2020a; Belghazi et al., 2018). The learning rate of the MINE network is set to 0.0002.
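+ The codebook maintenance described above can be sketched as a standard EMA update with dead-code relaunching. This is a schematic rather than the training code; in particular, we read the threshold $\frac{n}{10K}$ as the batch fragment count divided by ten times the codebook size, which is our interpretation.

```python
import torch

@torch.no_grad()
def ema_codebook_update(codebook, ema_count, ema_sum, z, codes, decay=0.95, eps=1e-5):
    """One EMA update of a (K, D) codebook given fragment latents z (n, D)
    and their assigned code indices codes (n,), with dead-code relaunching."""
    K, n = codebook.size(0), z.size(0)
    one_hot = torch.zeros(n, K, device=z.device)
    one_hot.scatter_(1, codes.unsqueeze(1), 1.0)

    # Exponential moving averages of code usage and of the assigned latents.
    ema_count.mul_(decay).add_(one_hot.sum(0), alpha=1 - decay)
    ema_sum.mul_(decay).add_(one_hot.t() @ z, alpha=1 - decay)
    codebook.copy_(ema_sum / (ema_count.unsqueeze(1) + eps))

    # Relaunch codes used fewer than n / (10 * K) times in this batch.
    dead = one_hot.sum(0) < n / (10 * K)
    if dead.any():
        new = z[torch.randint(0, n, (int(dead.sum()),), device=z.device)]
        codebook[dead], ema_sum[dead], ema_count[dead] = new, new, 1.0
    return codebook
```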
373
+
374
+ We train all models with an exponentially decaying learning rate schedule, and take the model with the best validation loss as the final model. All models are trained on a single Nvidia RTX 4090 GPU. The V3 loss should decay to zero within a few epochs after training starts. All supervised learning methods converge within 2 hours, while the convergence time of the unsupervised learning methods ranges from 5 to 24 hours.
375
+
376
+ ![](images/573dc7414dd65f7503b252aa8d346e87c5ae7754e5bf5c4377c92adad1de4057.jpg)
377
+ Figure 6: t-SNE visualization of the learned digit (content) and color (style) representations on PhoneNums when there is no codebook redundancy ( $K = 10$ ).
378
+
379
+ ![](images/c8269a3af35f5111559e9cd864f6cfa060af02721da8abda7542b106c2fe6744.jpg)
380
+ Figure 7: t-SNE visualization of the learned digit (content) and color (style) representations on PhoneNums when the codebooks are redundant.
381
+
382
+ ![](images/5829540c4ed7ba669ebf3e9ef001bf11382191c0f976f90fe550f4580942e0f8.jpg)
383
+ Figure 8: t-SNE visualization of the learned pitch (content) and timbre (style) representations on InsNotes when there is no codebook redundancy ( $K = 12$ ).
384
+
385
+ ![](images/ab9eeaf6a9c859be92dec6f67f5b9bd9359f2ce7b272a5798abde23871c377f5.jpg)
386
+ Figure 9: t-SNE visualization of the learned pitch (content) and timbre (style) representations on InsNotes when the codebooks are redundant.
387
+
388
+ # C MORE EXPERIMENT RESULTS
389
+
390
+ In this part, we provide additional experimental results beyond those in Section 4. We focus on visualizations of the learned content and style representations, and on the alignment between the learned codebooks and the ground truth content labels.
391
+
392
+ # C.1 RESULTS OF CONTENT-STYLE DISENTANGLEMENT
393
+
394
+ This section provides 3-dimensional t-SNE visualization results of the learned content and style representations in support of Section 4.2. We show that on different datasets and under different $K$ settings, the content and style representations learned by V3 show the clearest groupings compared to the baselines, and the groupings match the ground truth content and style labels well.
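+ The plots in this section follow the standard t-SNE recipe; a minimal sketch using scikit-learn and matplotlib (labels are assumed to be integer class indices) is:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne3d(z: np.ndarray, labels: np.ndarray, title: str) -> None:
    """Project representations z of shape (n, d) to 3-D and color by labels."""
    z3 = TSNE(n_components=3, init="pca", random_state=0).fit_transform(z)
    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(z3[:, 0], z3[:, 1], z3[:, 2], c=labels, cmap="tab20", s=4)
    ax.set_title(title)
    plt.show()
```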
395
+
396
+ On PhoneNums, we first visualize with t-SNE the learned content and style representations when there is no codebook redundancy ( $K = 10$ ), and color them by the ground truth content or style labels. The results are shown in Figure 6. We can see that V3 learns clearer, better-grouped content and style representations compared to the unsupervised baselines. When the codebooks contain redundancy (we set $K = 12$ for learning digits and colors), the results are shown in Figure 7. We can see that V3 still achieves the clearest content and style grouping.
397
+
398
+ On InsNotes, the visualizations of $z^{\mathrm{c}}$ and $z^{\mathrm{s}}$ when $K = 12$ are shown in Figure 8, and the visualizations when the codebooks are redundant are shown in Figure 9. Both results also show that V3 groups content and style better than the baselines.
399
+
400
+ On SVHN, the visualizations of $z^{\mathrm{c}}$ are shown in Figure 10. Since SVHN does not have discrete style labels, we only show the grouping of content representations. V3 is trained with $K = 20$ . Although the grouping is not as clean as that on PhoneNums in Figure 6, V3 still achieves the best grouping of digits with its learned content representations among the unsupervised methods.
401
+
402
+ ![](images/b88e8169b7f2abeccd357f5ef608ef8e987d508164be3e95d58b75e87eb12b55.jpg)
403
+ Figure 10: t-SNE visualization of the learned content (digit) representations on SVHN.
404
+
405
+ On Sprites, there is no discrete style label either. Figure 11 shows the t-SNE visualizations of $z^{\mathrm{c}}$ , and V3 is trained with $K = 18$ . Both V3 and the cycle loss baseline achieve good content grouping, but some clusters of the cycle loss $z^{\mathrm{c}}$ break into several subclusters, indicating residual content-style entanglement. This is also supported by Section 4.2.
406
+
407
+ ![](images/52259bf94e7cbf17e91c559e3aa04f23bc8ec3473e19136adb3c1cb9a27c76af.jpg)
408
+ Figure 11: t-SNE visualization of the learned content (action) representations on Sprites. The 10 colors refer to 10 different actions.
409
+
410
+ # C.2 RESULTS OF SYMBOLIC CONTENT INTERPRETABILITY
411
+
412
+ This section provides intuitive visualizations of how the learned content codebook entries align with the ground truth content labels in Section 4.4. We first collect the frequency with which each content class is encoded to each codebook entry, and then permute the codebook entries so that the confusion matrix is as close to diagonal as possible, for a clear alignment. We then plot heatmaps of the confusion matrices between codebook entries (vertical axes) and content labels (horizontal axes).
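+ Concretely, the permutation can be obtained with a linear assignment on the count matrix. The sketch below uses SciPy's Hungarian solver; it mirrors the described procedure but is not necessarily the exact code used.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def aligned_confusion(codes: np.ndarray, labels: np.ndarray, K: int, C: int) -> np.ndarray:
    """Count matrix between K codebook entries (rows) and C content labels
    (columns), with rows permuted to be as close to diagonal as possible."""
    counts = np.zeros((K, C), dtype=np.int64)
    np.add.at(counts, (codes, labels), 1)

    # Match each label to the codebook entry that maximizes the assigned counts.
    row_ind, col_ind = linear_sum_assignment(counts, maximize=True)
    matched = row_ind[np.argsort(col_ind)]
    unmatched = np.setdiff1d(np.arange(K), matched)
    return counts[np.concatenate([matched, unmatched])]
```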
413
+
414
+ On PhoneNums and InsNotes, we plot the confusion matrices under different $K$ settings in Figure 12 and Figure 13, respectively. The results show that V3 achieves the clearest symbol interpretability in all $K$ settings. Results on SVHN and Sprites are shown in Figure 14 and Figure 15. On Sprites, both V3 and the cycle loss baseline learn codebooks with good interpretability, but V3 still has fewer misclassifications. Although V3 does not learn a clear one-to-one mapping from codebook entries to content labels on SVHN, it still shows a clearer alignment than the other methods. An interesting observation is the order of learning during training: V3 usually first distinguishes 0 and 1, then starts to recognize that 2 is different, then 3. It often confuses 5 with 6 and 1 with 7, and it usually fails to learn 8 and 9. This human-like learning trajectory might depend on both the ratio of content classes and their pairwise similarities in shape. A similar phenomenon is observed in the confusion matrices on Libri100, shown in Figure 16. V3 distinguishes between consonants and vowels fairly well, and confuses "z" with "s" and "n" with "ng", which are phonetically similar.
415
+
416
+ To further investigate the disentanglement ability of the models, we perform latent representation recombination using the trained models. Figure 3 has already demonstrated the results on SVHN of decoding a fixed $z^{\mathrm{s}}$ with every $z^{\mathrm{c}}$ . Here we show the results on PhoneNums, where instead of using a fixed $z^{\mathrm{s}}$ encoded from a single example, we compute the mean $z^{\mathrm{s}}$ of all fragments from a class as its style representation for decoding. We select the V3 model with $K = 10$ , align codebook entries with digit labels, and enumerate all combinations of $z^{\mathrm{c}}$ and $z^{\mathrm{s}}$ . We present the results in Figure 17. Compared to the baselines, V3 reconstructs the 8 involved colors and the digits from 0 to 9 fairly well, even though it is not given any discrete labels during training. In contrast, the MINE-based baseline and the cycle loss baseline fail to distinguish the digits, although their color reconstruction is acceptable. They generate blurry digits that look like "5", "8" or "6", which are the most conservative choices. As for results on the music dataset InsNotes, we refer the reader to our web demo page for an interactive experience.
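+ The recombination itself is a decode over the Cartesian product of content codes and per-class mean style vectors; a minimal sketch is shown below (the decoder interface and tensor shapes are assumptions of this sketch):

```python
import torch

@torch.no_grad()
def recombine(decoder, codebook, style_vectors, style_labels):
    """Decode every (content code, class-mean style) pair.

    codebook:      (K, D_c) learned content codebook entries.
    style_vectors: (N, D_s) style latents of all fragments.
    style_labels:  (N,)     style class of each fragment.
    Returns a tensor of shape (num_styles, K, ...) of generated fragments.
    """
    outputs = []
    for s in style_labels.unique():
        z_s = style_vectors[style_labels == s].mean(dim=0)        # class-mean style
        z_s = z_s.unsqueeze(0).expand(codebook.size(0), -1)       # one copy per content code
        outputs.append(decoder(codebook, z_s))                    # decoder(z_c, z_s) assumed
    return torch.stack(outputs)
```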
417
+
418
+ # D ABLATION STUDY
419
+
420
+ For the ablation study, we experiment with another type of variability measurement $\nu_{k}(\cdot)$ , namely the standard deviation (SD). In addition, we train four variants of V3, each without one of the four regularization terms defined in Equations 9-12. We conduct experiments on PhoneNums and InsNotes, the two datasets with style labels available, and evaluate the content-style disentanglement performance. The results are reported in Table 13 and Table 14. It can be seen that $\nu_{k} = \mathrm{SD}$ does not work as well as $\nu_{k} = \mathrm{MPD}$ , which can be explained by its weakness in constraining multi-peak content distributions
421
+
422
+ ![](images/4e3d96c4fcb5b1d8ba59951d96df7d428c2cd312a3b679e6daf4e1b5aeb0e337.jpg)
423
+ Figure 12: Confusion matrices of learned codebooks on PhoneNums. The horizontal axes show digit labels from "0" to "9", and the vertical axes show codebook atoms sorted by ground truth digit labels.
424
+
425
+ ![](images/b27b64df9c78244557439062f836d01d4450535688e4433f86d115faeefef73f.jpg)
426
+ Figure 13: Confusion matrices of learned codebooks on InsNotes. The horizontal axes show pitch labels from "C" to "B", and the vertical axes show codebook atoms sorted by ground truth pitch labels.
427
+
428
+ ![](images/6e6113d84ea6123ba6ab072ed8fd114375df5aedcfd26a25c47d05e7b52be07e.jpg)
429
+ Figure 14: Confusion matrices of learned codebooks on SVHN. The horizontal axes show digit labels from "0" to "9", and the vertical axes show codebook atoms sorted by ground truth digit labels.
430
+
431
+ ![](images/1400fcca99d319a58ea26201574168c867bd8cc5b92599b9cc4f45a477136561.jpg)
432
+ Figure 15: Confusion matrices of learned codebooks on Sprites. The horizontal axes are different action labels, and the vertical axes show codebook atoms sorted by ground truth action labels.
433
+
434
+ within samples. It is also worth noting that V3 sometimes performs fairly well even when one of its terms is discarded. In these cases, we observe a decrease in the discarded loss even though we do not explicitly optimize for it. We suspect this is due to the robustness of the V3 constraints, as reflected in the symmetric relationships among the four losses: enforcing three of the relations often lets the fourth fall into place automatically. However, in practice it is difficult to tell beforehand which term can be dropped, as this also depends on the specific content and style variations of the domain. As a result, the V3 constraints as a whole show robust performance across domains.
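+ For reference, the two variability measures compared here can be sketched as below. We take SD to be the per-dimension standard deviation of the latents within a scope; MPD is written here as a mean pairwise distance, which is our reading of the abbreviation (the precise definition is given in the main text).

```python
import torch

def variability_sd(z: torch.Tensor) -> torch.Tensor:
    """Standard-deviation variability of a set of latents z with shape (n, d)."""
    return z.std(dim=0).mean()

def variability_mpd(z: torch.Tensor) -> torch.Tensor:
    """Mean pairwise Euclidean distance of latents z with shape (n, d)."""
    d = torch.cdist(z, z)                 # (n, n) pairwise distances
    n = z.size(0)
    return d.sum() / (n * (n - 1))        # average over off-diagonal pairs
```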
435
+
436
+ ![](images/43877bbbfec6703aa88bf850b28203e2619357878d0cc24a84438abc4777f3c8.jpg)
437
+ Figure 16: Confusion matrices of learned codebooks on Libri100. The horizontal axes are different phoneme labels, and the vertical axes show codebook atoms sorted by ground truth phoneme labels.
438
+
439
+ ![](images/466be9e6da597f7552c9a7436f974901ba2e9c2f44165f1bce7877054481d571.jpg)
440
+
441
+ ![](images/8ab46b6a75968ca4ca0aad2cabc40c3ee96eb19f0f3c191c2e78a4654cb4d6c3.jpg)
442
+
443
+ ![](images/e2ce638ff74c318dbd01931db39d03a5b2525177e298290b8ad92caea2a1351f.jpg)
444
+ Figure 17: Comparison of generated digits by recombining content and style latents using unsupervised methods trained on PhoneNums.
445
+
446
+ # E DISCUSSION
447
+
448
+ Connection between Content-Style Disentanglement and OOD Generalizability: Disentanglement can intuitively boost OOD generalization for several key reasons. By separating different factors, like content and style, the model can focus on the important features without getting distracted by irrelevant variations. This separation makes the model more robust to changes. For instance, if the style changes in an OOD sample while the content remains similar, the model might still recognize and process the content effectively. Additionally, disentangled representations often lead to more
449
+
450
+ Table 13: Ablation study of V3 settings on content-style disentanglement performance on PhoneNums. Values are reported in percentage.
451
+
452
+ <table><tr><td rowspan="3">Method</td><td rowspan="3">K</td><td colspan="4">Content</td><td colspan="4">Style</td></tr><tr><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td></tr><tr><td>zc↑</td><td>zs↓</td><td>zc↑</td><td>zs↓</td><td>zc↓</td><td>zs↑</td><td>zc↓</td><td>zs↑</td></tr><tr><td>V3</td><td>10</td><td>83.2</td><td>12.8</td><td>84.1</td><td>18.5</td><td>14.9</td><td>95.4</td><td>22.6</td><td>91.0</td></tr><tr><td>V3 (νk=SD)</td><td>10</td><td>42.7</td><td>17.9</td><td>53.5</td><td>21.5</td><td>18.2</td><td>49.9</td><td>24.7</td><td>51.6</td></tr><tr><td>V3 (w/o Lcontent)</td><td>10</td><td>43.8</td><td>13.8</td><td>51.2</td><td>18.9</td><td>18.1</td><td>90.8</td><td>22.4</td><td>87.5</td></tr><tr><td>V3 (w/o Lstyle)</td><td>10</td><td>64.6</td><td>13.2</td><td>70.3</td><td>18.7</td><td>17.7</td><td>87.8</td><td>24.5</td><td>83.9</td></tr><tr><td>V3 (w/o Lfragment)</td><td>10</td><td>96.3</td><td>11.7</td><td>98.9</td><td>17.9</td><td>15.9</td><td>90.1</td><td>23.2</td><td>88.5</td></tr><tr><td>V3 (w/o Lsample)</td><td>10</td><td>47.0</td><td>12.7</td><td>57.8</td><td>18.9</td><td>15.4</td><td>88.4</td><td>24.7</td><td>85.3</td></tr></table>
453
+
454
+ Table 14: Ablation study of V3 settings on content-style disentanglement performance on InsNotes. Values are reported in percentage.
455
+
456
+ <table><tr><td rowspan="3">Method</td><td rowspan="3">K</td><td colspan="4">Content</td><td colspan="4">Style</td></tr><tr><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td><td colspan="2">PR-AUC</td><td colspan="2">Best F1</td></tr><tr><td>z^c ↑</td><td>z^s ↓</td><td>z^c ↑</td><td>z^s ↓</td><td>z^c ↓</td><td>z^s ↑</td><td>z^c ↓</td><td>z^s ↑</td></tr><tr><td>V3</td><td>12</td><td>89.9</td><td>8.9</td><td>90.1</td><td>15.1</td><td>9.3</td><td>87.5</td><td>15.0</td><td>88.0</td></tr><tr><td>V3 (νk=SD)</td><td>12</td><td>12.9</td><td>9.9</td><td>17.5</td><td>15.0</td><td>16.3</td><td>24.7</td><td>24.1</td><td>36.5</td></tr><tr><td>V3 (w/o Lcontent)</td><td>12</td><td>19.2</td><td>9.3</td><td>28.0</td><td>14.3</td><td>13.6</td><td>66.2</td><td>19.0</td><td>68.4</td></tr><tr><td>V3 (w/o Lstyle)</td><td>12</td><td>72.1</td><td>8.9</td><td>14.2</td><td>84.0</td><td>13.7</td><td>78.7</td><td>23.6</td><td>79.0</td></tr><tr><td>V3 (w/o Lfragment)</td><td>12</td><td>26.0</td><td>12.1</td><td>35.7</td><td>17.6</td><td>13.5</td><td>53.7</td><td>20.1</td><td>56.7</td></tr><tr><td>V3 (w/o Lsample)</td><td>12</td><td>86.4</td><td>7.9</td><td>89.3</td><td>14.2</td><td>11.3</td><td>50.7</td><td>19.4</td><td>56.2</td></tr></table>
457
+
458
+ generalized features, enabling the model to identify important patterns that are invariant across different distributions. This property facilitates easier transfer learning because models with disentangled representations can be more readily fine-tuned for new tasks, as supported by our experiments in Section 4.3.
459
+
460
+ Connection between Content-Style Disentanglement and Symbolic Interpretability: In Section 4.2 and Section 4.4, we separately examined content-style disentanglement and symbolic-level interpretability. This discussion now seeks to understand how these elements are interconnected—specifically, whether V3's disentangled representation space inherently improves symbolic-level interpretability.
461
+
462
+ The transition from purely observational data to symbolic representation remains an open question in cognitive science and artificial intelligence. We suggest that robust content-style disentanglement is closely linked to better symbolic interpretability, as evidenced in Tables 1 to 4, 8 and 9. These tables show that both V3 and the supervised methods, which achieve better disentanglement, also provide superior interpretability compared to methods that are less effective in disentanglement (supervised classification, although not VQ-based, can be viewed here as another form of interpretable symbolic labeling). Additionally, as illustrated in Figures 6 to 11, well-disentangled style spaces (i.e., spaces containing less content information) exhibit well-formed clusters, which facilitates straightforward post-processing for discrete and symbolic labeling.
463
+
464
+ Connection between Content-Style Disentanglement and Philosophy: The statistical patterns and mutual relationship between content and style closely correspond to the philosophical concept of Yin and Yang, a fundamental duality in universal balance and dynamics. In the famous Yin-Yang diagram, Yin and Yang are equally divided and composed of two identical shapes (the fish-like swirl and the small dot) with opposing colors (light and dark), together forming the completeness of the world (Wilhelm et al., 2001).
465
+
466
+ We can interpret the two fundamental shapes as the two axes along which data is observed — the swirl represents observation across samples, while the dot represents observation across fragments within samples. The two colors signify two kinds of dynamics — light denotes variant, and dark denotes invariant. As a result, the duality of Yin and Yang becomes the duality of content and style. Content shows variability within samples (the light dot) and invariability of vocabulary across samples (the
467
+
468
+ ![](images/721012dd9f31c9209948a652aa2b85380d13d604c41ebfe3f414fdb3cd9bf808.jpg)
469
+ Figure 18: An illustration of the correspondence between the content-style duality and the Yin-Yang duality.
470
+
471
+ dark swirl), while style shows variability across samples (the light swirl) and invariability within a sample (the dark dot). They can be disentangled from data as two components, yet neither can exist alone.
472
+
473
+ V3 and related works: We explain the differences and connections between V3 and several of the most closely related works below.
474
+
475
+ - InfoGAN (Chen et al., 2016): InfoGAN is similar to our approach in that both models learn interpretable representations and decouple these representations from the data. However, there are several key differences: 1) Each representation in InfoGAN is of very low dimensionality; 2) The specific aspects learned are less controllable, while V3 focuses on learning the distinction between content and style; 3) GANs are known to be less stable in training than autoencoders and VAEs, and the GAN framework is geared more toward generative modeling than representation learning.
476
+ - DSAE and variants (Hsu et al., 2017; Yingzhen and Mandt, 2018; Bai et al., 2021; Luo et al., 2022; 2024): DSAE shares similar insights with V3 regarding the intrinsic relationship between content and style, but it primarily focuses on the invariability of style and the variability of content within a sample, giving less attention to the invariability of the content vocabulary and the variability of style across a broader scope. Other distinctions include: 1) DSAE focuses on learning a fixed style representation for a whole sample, which may struggle with samples where the style varies, such as singing performances that feature both chest voice and falsetto, or instrument performances with multiple articulations; 2) The DSAE family requires access to the entire sequence when encoding style; 3) Most importantly, the content learned in DSAE is context-dependent, while V3 emphasizes learning more universal content representations.
477
+ - VICReg (Bardes et al., 2022): V3 has a similar form of loss function to VICReg, and both models leverage variance and invariance among entities to help train representation learning frameworks. However, VICReg focuses on learning a single representation for each entity, while V3 focuses on learning disentangled representations. Also, VICReg learns discriminative representations without clear interpretability, whereas V3 learns interpretable content symbols and has a decoding ability to recombine content-style pairs. In fact, we draw on their mathematical formulation and the idea of using regularization to prevent latent representations from collapsing, a concept also advocated by LeCun (2022).
2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2243bec8441f32b125f5ba75a3ccefa61ca4b29145befcdf9a96799123eccc87
3
+ size 1476750
2025/Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Meta-Learning via In-Context Learning/8959c2e4-7cb5-4574-88e7-fc386607e7ca_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Meta-Learning via In-Context Learning/8959c2e4-7cb5-4574-88e7-fc386607e7ca_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Meta-Learning via In-Context Learning/8959c2e4-7cb5-4574-88e7-fc386607e7ca_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5ac54b7f85907c2d30df79f8b843669b056e7fd3351c48fd4031d17096886fc
3
+ size 2244116
2025/Unsupervised Meta-Learning via In-Context Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Meta-Learning via In-Context Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4307765f2c859e2474a9841623d83958ac6653a53815a18dd54815b26ae871f0
3
+ size 911964
2025/Unsupervised Meta-Learning via In-Context Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Unsupervised Model Tree Heritage Recovery/4c7689e4-624c-40ce-8d7a-0e434a5663b9_content_list.json ADDED
The diff for this file is too large to render. See raw diff