[2] Edward J. Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
[3] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Proceedings of the International Conference on Machine Learning, pages 2790–2799, 2019.
[4] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge J. Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In Proceedings of the European Conference on Computer Vision, pages 709–727, 2022.
[5] Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149, 2022.
[6] James Seale Smith, Leonid Karlinsky, Vyshnavi Gutta, Paola Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Rameswar Panda, Rogerio Feris, and Zsolt Kira. CODA-Prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11909–11919, 2023.
[7] Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, and Jian Zhang. A unified continual learning framework with general parameter-efficient tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11449–11459, 2023.
[8] Yan-Shuo Liang and Wu-Jun Li. InfLoRA: Interference-free low-rank adaptation for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23638–23647, 2024.
[9] Yue Lu, Shizhou Zhang, De Cheng, Yinghui Xing, Nannan Wang, Peng Wang, and Yanning Zhang. Visual prompt tuning in null space for continual learning. arXiv preprint arXiv:2406.05658, 2024.
[10] Shuyang Jiang, Yusheng Liao, Ya Zhang, Yanfeng Wang, and Yu Wang. TAIA: Large language models are out-of-distribution data learners. arXiv preprint arXiv:2405.20192, 2024.
[11] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. Knowledge neurons in pretrained transformers. arXiv preprint arXiv:2104.08696, 2021.
[12] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913, 2020.
[13] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih, editors, Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, 2021.
[14] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 4582–4597, 2021.
[15] Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 1–9, 2022.
[16] Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, and Hung-Yi Lee. AdapterBias: Parameter-efficient token-dependent representation shift for adapters in NLP tasks. In Findings of the Association for Computational Linguistics, pages 2608–2621, 2022.
[17] Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. Compacter: Efficient low-rank hypercomplex adapter layers. In Advances in Neural Information Processing Systems, pages 1022–1035, 2021.
[18] Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. AdaptFormer: Adapting vision transformers for scalable visual recognition. In Advances in Neural Information Processing Systems, pages 16664–16678, 2022.
[19] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In ICML, pages 3987–3995, 2017.
[20] Sangwon Jung, Hongjoon Ahn, Sungmin Cha, and Taesup Moon. Continual learning with node-importance based adaptive group sparse regularization. In Advances in Neural Information Processing Systems, pages 3647–3658, 2020.
[21] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In ECCV, pages 139–154, 2018.
[22] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. PNAS, 114(13):3521–3526, 2017.
[23] Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, and Lucas Page-Caccia. Online continual learning with maximal interfered retrieval. In Advances in Neural Information Processing Systems, pages 11849–11860, 2019.
[24] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, pages 11816–11825, 2019.
[25] Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, and Liang Wan. Exploring example influence in continual learning. In Advances in Neural Information Processing Systems, pages 27075–27086, 2022.
[26] Yan-Shuo Liang and Wu-Jun Li. Loss decoupling for task-agnostic continual learning. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems, 2023.
[27] Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
[28] Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen. Compacting, picking and growing for unforgetting continual learning. In Advances in Neural Information Processing Systems, pages 13647–13657, 2019.
[29] Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In Proceedings of the International Conference on Machine Learning, pages 3925–3934, 2019.
[30] Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. Continual learning of context-dependent processing in neural networks. Nature Machine Intelligence, 1(8):364–372, 2019.
[31] Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. Orthogonal gradient descent for continual learning. In AISTATS, pages 3762–3773, 2020.
[32] Gobinda Saha, Isha Garg, and Kaushik Roy. Gradient projection memory for continual learning. In ICLR, 2021.
[33] Sen Lin, Li Yang, Deliang Fan, and Junshan Zhang. TRGP: Trust region gradient projection for continual learning. arXiv preprint arXiv:2202.02931, 2022.
[34] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.
[35] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
[36] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, 2019.
[37] Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Concetto Spampinato, and Simone Calderara. Transfer without forgetting. In Proceedings of the European Conference on Computer Vision, pages 692–709, 2022.
[38] Zangwei Zheng, Mingyuan Ma, Kai Wang, Ziheng Qin, Xiangyu Yue, and Yang You. Preventing zero-shot transfer degradation in continual learning of vision-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19068–19079, 2023.
[39] Muhammad Gul Zain Ali Khan, Muhammad Ferjad Naeem, Luc Van Gool, Didier Stricker, Federico Tombari, and Muhammad Zeshan Afzal. Introducing language guidance in prompt-based continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11463–11473, 2023.
[40] Hao Yu, Xin Yang, Xin Gao, Yan Kang, Hao Wang, Junbo Zhang, and Tianrui Li. Personalized federated continual learning via multi-granularity prompt. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4023–4034, 2024.
[41] Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023.
[42] Xisen Jin, Arka Sadhu, Junyi Du, and Xiang Ren. Gradient-based editing of memory examples for online task-free continual learning. NeurIPS, 34:29193–29205, 2021.
[43] Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, and Bo Li. LoRA-FA: Memory-efficient low-rank adaptation for large language models fine-tuning. arXiv preprint arXiv:2308.03303, 2023.
[44] Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, et al. DualPrompt: Complementary prompting for rehearsal-free continual learning. In ECCV, pages 631–648, 2022.
[45] Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, and Jun Zhu. Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality. arXiv preprint arXiv:2310.07234, 2023.
[46] Muhammad Rifki Kurniawan, Xiang Song, Zhiheng Ma, Yuhang He, Yihong Gong, Yang Qi, and Xing Wei. Evolving parameterized prompt memory for continual learning. In AAAI, volume 38, pages 13301–13309, 2024.
[47] Li Jiao, Qiuxia Lai, Yu Li, and Qiang Xu. Vector quantization prompting for continual learning. arXiv preprint arXiv:2410.20444, 2024.
[48] Anonymous. S-LoRA: Scalable low-rank adaptation for class incremental learning. Under review at International Conference on Learning Representations (ICLR), 2025.
[49] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340–8349, 2021.
[50] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
[51] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1406–1415, 2019.
[52] James Seale Smith, Yen-Chang Hsu, Lingyu Zhang, Ting Hua, Zsolt Kira, Yilin Shen, and Hongxia Jin. Continual diffusion: Continual customization of text-to-image diffusion with C-LoRA. arXiv preprint arXiv:2304.06027, 2024.
[53] Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, and Yunchao Wei. SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model. In ICCV, 2023.
[54] Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. ImageNet-21K pretraining for the masses. In NeurIPS, 2021.
[55] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2018.
[56] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. Image BERT pre-training with online tokenizer. In ICLR, 2022.
[57] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9650–9660, 2021.
[58] Da-Wei Zhou, Hai-Long Sun, Jingyi Ning, Han-Jia Ye, and De-Chuan Zhan. Continual learning with pre-trained models: A survey. arXiv preprint arXiv:2401.16386, 2024.

A Appendix

A.1 Proof of Proposition 4.1

Before proving Proposition 4.1, we first establish a supporting lemma.

Lemma A.1 (Gradient Preservation under Orthogonal Updates). Let $\mathcal{L}_j:\mathbb{R}^d\to\mathbb{R}$ be a twice-differentiable loss function corresponding to task $j$, and let $W_j$ be the model parameters after completing task $j$. Suppose at step $t>j$, the update direction $\tilde{g}_t$ for task $t$ satisfies $\langle\nabla\mathcal{L}_j(W_j),\tilde{g}_t\rangle=0$. The updated parameter is given by $W_t=W_j-\eta\tilde{g}_t$. Further assume that the second-order term $\eta H_j\tilde{g}_t$ in the Taylor expansion of $\nabla\mathcal{L}_j$ can be ignored. Then the gradient of task $j$ remains unchanged: $\nabla\mathcal{L}_j(W_t)=\nabla\mathcal{L}_j(W_j)$.

Proof. Since $\mathcal{L}_j$ is twice-differentiable, we apply the first-order Taylor expansion of the gradient at the point $W_j$ in the direction of $\tilde{g}_t$:
$$\nabla\mathcal{L}_j(W_t)=\nabla\mathcal{L}_j(W_j-\eta\tilde{g}_t)=\nabla\mathcal{L}_j(W_j)-\eta H_j\tilde{g}_t+o(\eta).$$
Under the assumption that $\eta H_j\tilde{g}_t$ is negligible (i.e., small learning rate and low curvature), we ignore the second-order term:
$$\nabla\mathcal{L}_j(W_t)=\nabla\mathcal{L}_j(W_j).$$

Based on Lemma A.1, we denote $\nabla\mathcal{L}_i(W_j)$ as $G_i$. Next, we provide the proof of Proposition 4.1.

Proposition 4.1. Assume the loss $\mathcal{L}_i(W)$ is $L$-smooth for all $i\in\{1,\dots,t\}$. Let the model update be $W_t=W_{t-1}+\Delta W_t$. Then:
$$\sum_{i=1}^{t}\big(\mathcal{L}_i(W_t)-\mathcal{L}_i(W_{t-1})\big)\le-(t-1)\langle\Delta W_t,G_t^{\mathrm{old}}\rangle-\langle\Delta W_t,G_t\rangle+\frac{(t-1)L}{2}\|\Delta W_t\|_F^2.$$

Proof. By $L$-smoothness of each $\mathcal{L}_i$, we have:
$$\mathcal{L}_i(W_t)\le\mathcal{L}_i(W_{t-1})+\langle\nabla\mathcal{L}_i(W_{t-1}),\Delta W_t\rangle+\frac{L}{2}\|\Delta W_t\|_F^2.$$
Summing over $i=1$ to $t$:
$$\sum_{i=1}^{t}\mathcal{L}_i(W_t)-\sum_{i=1}^{t}\mathcal{L}_i(W_{t-1})\le\sum_{i=1}^{t}\langle\nabla\mathcal{L}_i(W_{t-1}),\Delta W_t\rangle+\frac{tL}{2}\|\Delta W_t\|_F^2.$$
Let $G_t^{\mathrm{old}}=\frac{1}{t-1}\sum_{i=1}^{t-1}\nabla\mathcal{L}_i(W_{t-1})$; then:
$$\sum_{i=1}^{t-1}\langle\nabla\mathcal{L}_i(W_{t-1}),\Delta W_t\rangle=(t-1)\langle G_t^{\mathrm{old}},\Delta W_t\rangle.$$
Substituting back gives:
$$\sum_{i=1}^{t}\big(\mathcal{L}_i(W_t)-\mathcal{L}_i(W_{t-1})\big)\le-(t-1)\langle\Delta W_t,G_t^{\mathrm{old}}\rangle-\langle\Delta W_t,G_t\rangle+\frac{(t-1)L}{2}\|\Delta W_t\|_F^2.$$
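As a quick numerical companion to Lemma A.1 (not part of the original proof; the quadratic loss and all names below are illustrative assumptions), the following sketch verifies that an update direction orthogonal to the current gradient perturbs that gradient only through the second-order term $\eta H_j\tilde{g}_t$, which shrinks with the learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Illustrative quadratic task loss L_j(W) = 0.5 * W^T H W, so grad L_j(W) = H W.
A = rng.standard_normal((d, d))
H = A @ A.T / d                      # a fixed PSD "Hessian" H_j
W_j = rng.standard_normal(d)
grad_j = H @ W_j                     # grad L_j(W_j)

# Build an update direction g_t orthogonal to grad L_j(W_j).
g = rng.standard_normal(d)
g_t = g - (g @ grad_j) / (grad_j @ grad_j) * grad_j
assert abs(g_t @ grad_j) < 1e-10     # <grad L_j(W_j), g_t> = 0

for eta in (1e-1, 1e-2, 1e-3):
    W_t = W_j - eta * g_t
    drift = np.linalg.norm(H @ W_t - grad_j)   # ||grad L_j(W_t) - grad L_j(W_j)||
    # For a quadratic loss the drift equals eta * ||H g_t|| exactly,
    # so it vanishes linearly in eta, matching the lemma's approximation.
    print(eta, drift, eta * np.linalg.norm(H @ g_t))
```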
A.2 Proof of Theorem 4.2

Let $W_{t-1}$ denote the weight matrix of a linear layer in the model, updated as $W_t=W_{t-1}+\Delta\hat{W}_t=W_{t-1}+U_{k_t}U_{k_t}^{\top}\Delta W_t$. Since the update direction of the new task is unknown, we assume that it is uniformly distributed across all directions; that is, $\Delta W_t$ has the same expected projection value across the different feature directions of $G_t$. We provide the expected value of the stability loss:
$$\mathbb{E}[\mathcal{L}_t^{S}(W_t)]=-(t-1)\,\epsilon_t(k_t)\,\langle\Delta W_t,G_t^{\mathrm{old}}\rangle,\qquad(18)$$
and of the plasticity loss:
$$\mathbb{E}[\mathcal{L}_t^{P}(W_t)]=-\frac{k_t}{d}\langle\Delta W_t,G_t\rangle.\qquad(19)$$

Proof. Stability loss: By definition, the projected update is $\Delta\hat{W}_t=U_{k_t}U_{k_t}^{\top}\Delta W_t$. Thus, the expected stability loss is:
$$\mathbb{E}[\mathcal{L}_t^{S}]=-(t-1)\,\mathbb{E}[\langle\Delta\hat{W}_t,G_t^{\mathrm{old}}\rangle]=-(t-1)\,\mathbb{E}[\langle U_{k_t}U_{k_t}^{\top}\Delta W_t,G_t^{\mathrm{old}}\rangle]=-(t-1)\,\mathbb{E}[\mathrm{Tr}(\Delta W_t^{\top}U_{k_t}U_{k_t}^{\top}G_t^{\mathrm{old}})].$$
Let $G_t^{\mathrm{old}}=\sum_{i=1}^{d}\sigma_i u_i v_i^{\top}$ be the SVD. Then
$$U_{k_t}U_{k_t}^{\top}G_t^{\mathrm{old}}=\sum_{i=d-k_t+1}^{d}\sigma_i u_i v_i^{\top},$$
so
$$\mathbb{E}[\mathcal{L}_t^{S}]=-(t-1)\sum_{i=d-k_t+1}^{d}\sigma_i\,\mathbb{E}[\langle\Delta W_t,u_i v_i^{\top}\rangle_F].$$
Under the uniform distribution assumption, all expected projections are equal: $\mathbb{E}[\langle\Delta W_t,u_i v_i^{\top}\rangle_F]=c$ for all $i$. Then:
$$\mathbb{E}[\mathcal{L}_t^{S}]=-(t-1)\cdot c\cdot\sum_{i=d-k_t+1}^{d}\sigma_i.$$
Also,
$$\langle\Delta W_t,G_t^{\mathrm{old}}\rangle=\sum_{i=1}^{d}\sigma_i\,\langle\Delta W_t,u_i v_i^{\top}\rangle_F=c\sum_{i=1}^{d}\sigma_i,$$
so
$$c=\frac{\langle\Delta W_t,G_t^{\mathrm{old}}\rangle}{\sum_{i=1}^{d}\sigma_i}.$$
Thus, with $\epsilon_t(k_t)=\sum_{i=d-k_t+1}^{d}\sigma_i\,\big/\sum_{i=1}^{d}\sigma_i$ (the tail ratio of the singular values), we obtain
$$\mathbb{E}[\mathcal{L}_t^{S}]=-(t-1)\,\epsilon_t(k_t)\,\langle\Delta W_t,G_t^{\mathrm{old}}\rangle.$$

Plasticity loss: The plasticity loss is:
$$\mathbb{E}[\mathcal{L}_t^{P}]=-\mathbb{E}[\langle\Delta\hat{W}_t,G_t\rangle]=-\mathbb{E}[\langle U_{k_t}U_{k_t}^{\top}\Delta W_t,G_t\rangle].$$
Let $\alpha_i=\langle\Delta W_t,u_i\rangle$ and $\beta_i=\langle G_t,u_i\rangle$. Then:
$$\mathbb{E}[\mathcal{L}_t^{P}]=-\mathbb{E}\Big[\sum_{i=1}^{k_t}\alpha_i\beta_i\Big].$$
Under the uniform assumption, the expected contribution of any single direction is $\frac{1}{d}$ of the total, hence:
$$\mathbb{E}[\mathcal{L}_t^{P}]=-\frac{k_t}{d}\langle\Delta W_t,G_t\rangle.$$

A.3 Evaluation metrics

To evaluate continual learning performance, we track the average classification accuracy over all classes encountered so far at the end of each task's training, following [47]. We denote by $A_{ij}$ the average accuracy on the $i$-th task after training the $j$-th task. Below, we provide formal definitions for two key metrics: FAA and CAA.

(i) Final Average Accuracy (FAA). FAA measures the overall performance after learning all tasks, defined as:
$$\mathrm{FAA}=\frac{1}{T}\sum_{i=1}^{T}A_{iT},\qquad(20)$$
where $T$ is the total number of tasks and $A_{iT}$ is the accuracy on task $i$ after completing task $T$. A larger FAA indicates a stronger ability to learn while minimizing forgetting. In some literature, FAA is also referred to as "Last-Acc."

(ii) Cumulative Average Accuracy (CAA). CAA is the average of the FAA values computed after each task is learned, given by:
$$\mathrm{CAA}=\frac{1}{T}\sum_{j=1}^{T}\frac{1}{j}\sum_{i=1}^{j}A_{ij}.\qquad(21)$$
It captures the overall performance at every incremental step. This metric is sometimes referred to as "Inc-Acc."
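Both metrics are straightforward to compute from the accuracy matrix $A_{ij}$. The sketch below (illustrative code of ours, not from the paper) implements Eqs. (20) and (21) for a matrix `acc` with `acc[i, j]` holding the accuracy on task i after training task j.

```python
import numpy as np

def faa(acc: np.ndarray) -> float:
    """Final Average Accuracy (Eq. 20): mean accuracy over all tasks after task T."""
    T = acc.shape[1]
    return float(np.mean(acc[:T, T - 1]))

def caa(acc: np.ndarray) -> float:
    """Cumulative Average Accuracy (Eq. 21): average of the running means
    (1/j) * sum_i A_{ij} taken after each incremental step j."""
    T = acc.shape[1]
    return float(np.mean([np.mean(acc[: j + 1, j]) for j in range(T)]))

# Toy run with T = 3 tasks; entries below the diagonal are never read.
acc = np.array([[90.0, 85.0, 80.0],
                [ 0.0, 88.0, 84.0],
                [ 0.0,  0.0, 92.0]])
print(faa(acc))  # (80 + 84 + 92) / 3 ≈ 85.33
print(caa(acc))  # mean of 90.0, 86.5 and 85.33 ≈ 87.28
```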
A.4 The size of the minor subspace evolves during training

We tracked the evolution of the minor subspace size throughout training on ImageNet-R with 20 tasks. As shown in Figure 4, as the number of tasks increases, model stability becomes more critical, leading to a progressively smaller minor subspace. Furthermore, when comparing different layers of the ViT, the minor subspace is larger in shallower layers and gradually shrinks as the layer depth increases. This suggests that changes in the deep-layer parameters have a greater impact on model stability. When the value of $\alpha$ varies, the model's learning space changes significantly; nevertheless, the model is still able to learn tasks effectively in all cases. Here, we introduce a simple experiment to explain this. As Table 7 shows, "Only head" means training only the classifier head for each task; "Only head and the first task" means training the first task's LoRA and the classifiers for all tasks. SplitLoRA is used for comparison. It can be observed that even when only the classifier is trained, the model can still learn effectively, and when subsequent tasks are fine-tuned on the first task's LoRA, performance improves. The knowledge contained in the pre-trained model and the beneficial knowledge from old tasks help the new task's learning.

Table 7: Average accuracy of tasks 2–20 under different fine-tuning strategies.

Method                        | Avg. Acc (Tasks 2–20)
Only head                     | 66.08
Only head and the first task  | 74.75
SplitLoRA                     | 81.47

[Figure 4 (plots omitted): six panels show the size of the minor subspace versus the number of tasks for ViT layers 0, 2, 4, 6, 9, and 11, comparing InfLoRA against SplitLoRA with α = 30, 20, 10, 5, and 1.]
Figure 4: We recorded the evolution of the minor subspace size during training on ImageNet-R with 20 tasks.

A.5 More results on other benchmarks

We further evaluate the performance of SplitLoRA on another benchmark [58]. Table 8 compares SplitLoRA with state-of-the-art continual learning methods on seven benchmarks. SplitLoRA achieves the best average performance (79.95%) and consistently ranks at the top across individual tasks. In particular, it excels on ImageNet-R, Omni, and VTAB, demonstrating strong generalization and knowledge retention. This confirms that SplitLoRA effectively balances stability and plasticity across diverse scenarios.

Table 8: Comparison with state-of-the-art methods on multiple benchmarks. We report CAA / FAA (%) on base and incremental sessions.

Method         | CIFAR B0 Inc5 | CUB B0 Inc10  | IN-R B0 Inc5  | IN-A B0 Inc20 | Obj B0 Inc10  | Omni B0 Inc30 | VTAB B0 Inc10 | Average
L2P            | 85.94 / 79.93 | 67.05 / 56.25 | 66.53 / 59.22 | 49.39 / 41.71 | 63.78 / 52.19 | 73.36 / 64.69 | 77.11 / 77.10 | 65.30
DualPrompt     | 87.87 / 81.15 | 77.47 / 66.54 | 63.31 / 55.22 | 53.71 / 41.67 | 59.27 / 49.33 | 73.92 / 65.52 | 83.36 / 81.23 | 67.11
CODA-Prompt    | 89.11 / 81.96 | 84.00 / 73.37 | 64.42 / 55.08 | 53.54 / 42.73 | 66.07 / 53.29 | 77.03 / 68.09 | 83.90 / 83.02 | 69.68
DAP            | 94.54 / 90.62 | 94.76 / 94.63 | 80.61 / 74.76 | 54.39 / 46.32 | 72.08 / 59.51 | 86.44 / 80.65 | 84.65 / 84.64 | 78.47
DAP w/o BI     | 68.07 / 58.16 | 65.27 / 52.05 | 50.40 / 37.99 | 34.48 / 21.84 | 50.47 / 37.55 | 65.43 / 52.53 | 79.63 / 79.87 | 53.83
SimpleCIL      | 87.57 / 81.26 | 92.20 / 86.73 | 62.58 / 54.55 | 59.77 / 48.91 | 65.45 / 53.59 | 79.34 / 73.15 | 85.99 / 84.38 | 72.53
ADAM + VPT-D   | 88.46 / 82.17 | 91.02 / 84.99 | 68.79 / 60.48 | 58.48 / 48.52 | 67.83 / 54.65 | 81.05 / 74.47 | 86.59 / 83.06 | 73.61
ADAM + SSF     | 87.78 / 81.98 | 91.72 / 86.13 | 68.94 / 60.60 | 61.30 / 50.03 | 69.15 / 56.64 | 80.53 / 74.00 | 85.66 / 81.92 | 74.02
ADAM + Adapter | 90.65 / 85.15 | 92.21 / 86.73 | 72.35 / 64.33 | 60.47 / 49.37 | 67.18 / 55.24 | 80.75 / 74.37 | 85.95 / 84.35 | 74.93
RanPAC         | 93.51 / 89.30 | 93.13 / 89.40 | 75.74 / 68.75 | 64.16 / 52.86 | 71.67 / 60.08 | 85.95 / 79.55 | 92.56 / 91.83 | 79.17
EASE           | 91.51 / 85.80 | 92.23 / 86.81 | 78.31 / 70.58 | 65.34 / 55.04 | 70.84 /
Exact Algorithms and Lower Bounds for Forming Coalitions of Constrained Maximum Size

Foivos Fioravantes (a), Harmender Gahlawat (b,c), Nikolaos Melissinos (d)

(a) Department of Theoretical Computer Science, Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech Republic
(b) Université Clermont Auvergne, CNRS, Clermont Auvergne INP, Mines Saint-Étienne, LIMOS, Clermont-Ferrand, 63000, France
(c) G-SCOP, Grenoble-INP, Grenoble, 38000, France
(d) Computer Science Institute, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic

A preliminary version of this paper appeared in the proceedings of the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) [31].

Abstract

Imagine we want to split a group of agents into teams in the most efficient way, considering that each agent has their own preferences about their teammates. This scenario is modeled by the extensively studied Coalition Formation problem. Here, we study a version of this problem where each team must additionally be of bounded size. We conduct a systematic algorithmic study, providing several intractability results as well as multiple exact algorithms that scale well as the input grows (FPT), which could prove useful in practice. Our main contribution is an algorithm that deals efficiently with tree-like structures (bounded treewidth) for "small" teams. We complement this result by proving that our algorithm is asymptotically optimal. Particularly, there can be no algorithm that vastly outperforms the one we present, under reasonable theoretical assumptions, even when considering star-like structures (bounded vertex cover number).

1. Introduction

Coalition Formation is a central topic in Computational Social Choice and economic game theory [15]. The goal is to partition a set of agents into coalitions to optimize some utility function. One well-studied notion in Coalition Formation is Hedonic Games [27], where the utility of an agent depends solely on the coalition it is placed in. Due to their extremely general nature that captures numerous scenarios, hedonic games are intensively studied in computer science [2, 6, 12, 14, 17, 29, 44, 58, 64], and are shown to have applications in social network analysis [59], scheduling group activities [21], and allocating tasks to wireless agents [63].

Due to its general nature, most problems concerning the computational complexity of hedonic games are hard [62]. In fact, even encoding the preferences of agents, in general, takes exponential space, which motivates the study of succinct representations for agent preferences. One of the most-studied such classes of games is Additive Separable Hedonic Games [13], where the agents are represented by the vertices of a weighted graph and the weight of each edge represents the utility of the agents joined by the edge for each other (see also the Weighted Graphical Games model of [22]). Variants where the agent preferences are asymmetric are modeled using directed graphs. Here, the utility of an agent for a group of agents is additive in nature. Additive Separable Hedonic Games are well-studied in the literature [1, 3, 7].

Most literature on Additive Separable Hedonic Games considers the agents to be selfish in nature and, hence, the notion used to measure efficiency is that of stability [62], including core stability, Nash stability, individual stability, etc. Semi-altruistic approaches, where the agents are concerned about their relatives' utility along with their own, are also studied [55].
A standard altruistic approach in computational social choice is that of utilitarian social welfare, where the goal is to maximize the total sum of utility of all the agents.
Observe that if all edge weights are positive, then the maximum utilitarian welfare is achieved by putting all agents in the same coalition. But there are many practical scenarios, e.g., forming office teams to allocate several projects or allocating cars/buses to people for a trip, where we additionally require that each coalition be of a bounded size. Coalition formation with constrained coalition size has recently been a focus of attention in ASHGs [51] and in Fractional Hedonic Games [54]. Further, coalition formations where each coalition needs to be of a fixed size have also been studied [9, 19].

We consider Additive Separable Hedonic Games with an additional constraint on the maximum allowed size of a coalition (denoted by C), with the goal of maximizing the total sum of utility of all the agents. We denote this as the C-Coalition Formation problem (C-CF for short). We provide the formal problem definition, along with other preliminaries, in Section 2. This game is shown to be NP-hard even when C = 3 [51] (and hence W-hard parameterized by C) via a straightforward reduction from Partition Into Triangles, which is NP-hard even for graphs with ∆ ≤ 4 [65]. Therefore, we consider the parameterized complexity of this problem through the lens of various structural parameters of the input graph and present a comprehensive analysis of its computational complexity.

In parameterized complexity, the goal is to restrict the exponential blow-up of the running time to some parameter of the input (which is usually much smaller than the input size) rather than the whole input size. Due to its practical efficiency, this paradigm has been used extensively to study problems arising from Computational Social Choice and Artificial Intelligence [5, 8, 16] (including hedonic games [36, 42, 61, 60, 18, 52, 40, 41]). It is worth mentioning that C-Coalition Formation has been studied from an approximation perspective and is shown to have applications in Path Transversals [49]. Moreover, [4] considered a Weighted Graphical Game to maximize social welfare and provided constant-factor approximations for restricted families of graphs. Finally, [32] considered the online version of several Weighted Graphical Games (aiming to maximize utilitarian social welfare), in one of which the authors also consider coalitions of bounded size.

[Figure 1 (diagram omitted): a hierarchy of structural parameters — vertex cover, twin-cover + ω, vertex integrity, tree-depth, treewidth, and treewidth + C — annotated with our results: FPT algorithms with optimal running times vc^O(vc) n^O(1) for vertex cover and (tw·C)^O(tw) n^O(1) for treewidth + C, with W[1]-hardness above.]
Figure 1: Overview of our results. A parameter A appearing linked to a parameter B with A being below B is to be understood as "there is a function f such that f(A) ≥ f(B)". In blue (red resp.) we exhibit the FPT (W[1]-hardness resp.) results we provide. The clique number of the graph is denoted by ω. Note that our FPT results are for the more general, weighted version of the problem (C-CFw), while our W[1]-hardness results are for the more restricted, unweighted version of the problem (C-CF). Finally, note that this figure does not include our results concerning the kernelization of C-CF.

Our contribution

In this paper we study the parameterized complexity of the C-Coalition Formation problem, which is a version of the Coalition Formation problem with the added constraint that each coalition should be of size at most C.
We consider two distinct variants of this problem according to the possibilities for the utilities of the agents. In the unweighted version, the utilities of all pairs of agents are either 0 (there is no edge connecting them) or 1. In the weighted version, the utilities of all pairs of agents are given by natural numbers. We will refer to the former as C-Coalition Formation and to the latter as Weighted C-Coalition Formation, respectively. In both cases, the underlying structure is assumed to be an undirected graph. In particular, this implies that the valuations are assumed to be symmetric.

We begin by noting an interesting connection to the notion of Nash stability. Consider a solution $P=\{C_1,\dots,C_\ell\}$. Roughly speaking, $P$ is Nash-stable if no agent has any additional gain by leaving its coalition and joining another one that can still accommodate it. In our setting, a solution is Nash-stable if for each agent $u$ with $u\in C_i$ and for each $j\in[\ell]$ such that $|C_j|<C$,
$$\sum_{x\in N_{C_i}(u)}w(ux)\ \ge\ \sum_{y\in N_{C_j}(u)}w(uy).$$
Observe that an optimal solution for C-Coalition Formation (similarly, Weighted C-Coalition Formation) is also Nash-stable. This is trivially true because if this condition is not satisfied for some $u\in C_i$, then we can move $u$ to $C_j$ while respecting the maximum size constraint and increase the valuation. Moreover, the notion of utilitarian social welfare captures the notion of Nash stability when the coalitions are required to be of bounded size and the valuations are symmetric. Hence, our positive results stand even when the goal is to obtain a Nash-stable coalition.
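To make the bounded-size stability condition concrete, here is a minimal sketch of ours (not code from the paper; symmetric weights are assumed, stored on unordered vertex pairs) that checks whether a given C-partition is Nash-stable in the above sense.

```python
def coalition_utility(u, coalition, w):
    """Total weight from agent u to its neighbors inside `coalition` (w is symmetric)."""
    return sum(w.get(frozenset((u, v)), 0) for v in coalition if v != u)

def is_nash_stable(partition, w, cap):
    """True iff no agent strictly gains by moving to a coalition with |C_j| < cap."""
    for i, Ci in enumerate(partition):
        for u in Ci:
            here = coalition_utility(u, Ci, w)
            for j, Cj in enumerate(partition):
                if j != i and len(Cj) < cap and coalition_utility(u, Cj, w) > here:
                    return False
    return True

# Unweighted toy instance: a triangle v1, v2, v3 plus a pendant vertex v4 attached to v3.
w = {frozenset(e): 1 for e in [("v1", "v2"), ("v2", "v3"), ("v1", "v3"), ("v3", "v4")]}
print(is_nash_stable([{"v1", "v2", "v3"}, {"v4"}], w, cap=3))  # True: the triangle is full
print(is_nash_stable([{"v1", "v2"}, {"v3", "v4"}], w, cap=3))  # False: v3 prefers {v1, v2}
```

Note how the capacity enters the check only through the condition len(Cj) < cap, mirroring the fact that an optimal C-partition can always repair a violated inequality by moving the agent.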
the input to have a star-like structure. In the following statements, vc denotes the vertex cover number of the input graph. Theorem 1.3. TheWeighted C-Coalition Formation problem can be solved in time vcO(vc)nO(1), wherevcdenotes the vertex cover number of the input graph. The next question we consider is whether the running time of our algorithms can be improved. In this regard, we prove that both of the above algorithms are, essentially, optimal, i.e., we do not expect a drastic improvement in their running times even if we restrict ourselves to the unweighted case. Theorem 1.4. Unless the ETH fails, C-Coalition Formation does not admit an al- gorithm running in time (Cvc)o(vc+C)nO(1), wherevcdenotes the vertex cover number of the input graph. We then slightly shift our approach and attack this problem using the toolkit of ker- nelization . Intuitively, our goal is to “peel off” the useless parts of the input (in polyno- mial time) and solve the problem for the “small” part of the input remaining, known as thekernel. Due to its profound impact, kernelization was termed “the lost continent of polynomial time” [30]. It is specifically useful in practical applications as it has shown tremendous speedups in practice [37, 38, 57, 66]. We begin with providing a polynomial kernel for the unweighted case when parameterized by the vertex cover number of the input graph and C. 4 Theorem 1.5. C-Coalition Formation admits a kernel with O(vc2C)vertices, where vcdenotes the vertex cover number of the input graph. It is well known that a problem admits a kernel iff it is FPT [20]. Hence, the notion of “tractability” in kernelization comes from designing polynomial kernels, and a problem is considered “intractable” from the kernelization point of view if it is unlikely to admit a polynomial kernel for the considered parameter. One may wonder if we can lift our kernelization algorithm for the weighted case. We answer this question negatively by proving that, unfortunately, there can be no such kernel for the weighted version. In some sense, this signifies that weights present a barrier for the kernelization of the problems considered here. Theorem 1.6. Weighted C-Coalition Formation parameterized by vc+Cdoes not admit a polynomial kernel, unless polynomial hierarchy collapses, where vcdenotes the vertex cover number of the input graph. We close our study by considering additional structural parameters for the unweighted case. We postpone the formal definition of these parameters until Section 2.4. Theorem 1.7. TheC-Coalition Formation problem can be solved in FPT time when parameterized by the vertex integrity of the input graph. Theorem 1.8. TheC-Coalition Formation problem is W[1]-hard when parameterized by the twin-cover number of G. The choice to focus our attention to the above two parameters is not arbitrary. Let Gbe a graph with vertex integrity vi, twin-cover number twc and vertex cover number vc. Then, vi ≤twc+ω(G)and twc ≤twc+ω(G), where ω(G)is the clique number of G. Finally, twc +ω(G)≤f(vc), for some computable function f. Taking the above into consideration, our Theorems 1.7 and 1.8 provide a clear dichotomy of the tractability of C-Coalition Formation when considering these parameters. 2.
2. Preliminaries

2.1. Graph Theory

We follow standard graph-theoretic notation [23]. In particular, we will use V(G) and E(G) to refer to the vertices and edges of G respectively; if no ambiguity arises, the parenthesis referring to G will be dropped. Moreover, we denote by $N_G(v)$ the neighbors of $v$ in $G$ and we use $d_G(v)$ to denote the degree of $v$ in $G$. That is, $N_G(v)=\{u\in V(G)\mid uv\in E(G)\}$ and $d_G(v)=|N_G(v)|$. Note that the subscripts may be dropped if they are clearly implied by the context. The maximum degree of G is denoted by ∆(G), or simply ∆ when clear from the context. Given a graph G = (V, E) and a set E′ ⊆ E, we use G − E′ to denote the graph resulting from the deletion of the edges of E′ from G. Finally, for any integer n, we denote by [n] the set of all integers between 1 and n. That is, $[n]=\{1,\dots,n\}$.

[Figure 2 (drawings omitted): a graph G on vertices v1, …, v6 and four 4-partitions of it: (a) the input graph G; (b) P = {{v1, v2, v3}, {v4, v5, v6}} with v(P) = 6; (c) P = {{v1, v3, v6}, {v2, v5, v4}} with v(P) = 5; (d) P = {{v1, v2, v3, v5}, {v4, v6}} with v(P) = 5; (e) P = {{v1}, {v2, v3, v5, v6}, {v4}} with v(P) = 4.]
Figure 2: An example of possible solutions to the unweighted version of the C-Coalition Formation problem. The input consists of the graph G illustrated in subfigure (a), and the capacity C = 4. The 4-partition in subfigure (b) has the optimal value of 6. Observe that every possible 4-partition that includes a set of 4 vertices (some of which are not included here) is suboptimal.

2.2. Problem Formulation

Formally, the input of Weighted C-Coalition Formation consists of a graph G = (V, E) and an edge-weight function $w:E\to\mathbb{N}$. Additionally, we are given a capacity $C\in\mathbb{N}$ as part of the input. Our goal is to find a C-partition of V, that is, a partition $P=\{C_1,\dots,C_p\}$ such that $|C_i|\le C$ for each $i\in[p]$. For each $i\in[p]$, let $E_i$ denote the edges of $G[C_i]$. Let E(P) be the set of edges of the partition P, i.e., $E(P)=\bigcup_{i=1}^{p}E(G[C_i])$. The value of a C-partition P is $v(P)=\sum_{i=1}^{p}\sum_{e\in E_i}w(e)$. We are interested in computing an optimal C-partition, i.e., a C-partition of maximum value. Note that we will also use the defined notations for general (not necessarily C-)partitions.

We are also interested in the unweighted version of the C-Coalition Formation problem, where each edge of the input graph has a weight of 1; in such cases, the input of the problem will only consist of the graph and the required capacity. In Figure 2 we illustrate an example of this unweighted version.
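As a sanity-check companion to this definition, the following brute-force sketch of ours (usable only on tiny instances, since it enumerates Bell-number many partitions; the example graph is illustrative and not the exact graph of Figure 2) computes an optimal C-partition and its value v(P).

```python
def set_partitions(items):
    """Yield every partition of `items` into blocks (Bell-number many; tiny inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):                 # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part                     # or open a new block

def optimal_c_partition(vertices, w, cap):
    """Return (v(P), P) for an optimal C-partition: maximize the total weight of
    edges whose endpoints land in the same block, all blocks of size <= cap."""
    best_val, best_part = -1, None
    for part in set_partitions(list(vertices)):
        if any(len(block) > cap for block in part):
            continue
        val = sum(w.get(frozenset((u, v)), 0)
                  for block in part
                  for i, u in enumerate(block) for v in block[i + 1:])
        if val > best_val:
            best_val, best_part = val, part
    return best_val, best_part

# Illustrative unweighted instance: two triangles {1,2,3} and {4,5,6} joined by edge (3,4).
w = {frozenset(e): 1 for e in [(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6), (3, 4)]}
print(optimal_c_partition(range(1, 7), w, cap=4))  # value 6: keep both triangles intact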
2.3. Parameterized Complexity – Kernelization

Parameterized complexity is a computational paradigm that extends classical measures of time complexity. The goal is to examine the computational complexity of problems with respect to an additional measure, referred to as the parameter. Formally, a parameterized problem is a set of instances $(x,k)\in\Sigma^*\times\mathbb{N}$, where $k$ is called the parameter of the instance. A parameterized problem is Fixed-Parameter Tractable (FPT) if it can be solved in $f(k)\,|x|^{O(1)}$ time for an arbitrary computable function $f:\mathbb{N}\to\mathbb{N}$. According to standard complexity-theoretic assumptions, a problem is not in FPT if it is shown to be W[1]-hard. This is achieved through a parameterized reduction from another W[1]-hard problem: a reduction, achieved in polynomial time, that also guarantees that the size of the considered parameter is preserved.

A kernelization algorithm is a polynomial-time algorithm that takes as input an instance (I, k) of a problem and outputs an equivalent instance (I′, k′) of the same problem such that the size of (I′, k′) is bounded by some computable function f(k). The problem is said to admit an f(k)-sized kernel, and if f(k) is polynomial, then the problem is said to admit a polynomial kernel. It is known that a problem is FPT if and only if it admits a kernel. Finally, the lower bounds we present are based on the so-called Exponential Time Hypothesis (ETH for short) [45], a weaker version of which states that 3-SAT cannot be solved in time $2^{o(n+m)}$, for n and m being the number of variables and clauses of the input formula, respectively. We refer the interested reader to classical monographs [20, 56, 33, 24, 34] for a more comprehensive introduction to this topic.

2.4. Structural parameters

Let G = (V, E) be a graph. A set U ⊆ V is a vertex cover of G if for every edge $e\in E$ it holds that $U\cap e\ne\emptyset$. The vertex cover number of G, denoted vc(G), is the minimum size of a vertex cover of G.

A tree-decomposition of G is a pair (T, B), where T is a tree and B is a family of sets assigning to each node t of T its bag $B_t\subseteq V$, such that the following conditions hold:
• for every edge $\{u,v\}\in E(G)$, there is a node $t\in V(T)$ such that $u,v\in B_t$, and
• for every vertex $v\in V$, the set of nodes t with $v\in B_t$ induces a connected subtree of T.
The width of a tree-decomposition (T, B) is $\max_{t\in V(T)}|B_t|-1$, and the treewidth tw(G) of a graph G is the minimum width of a tree-decomposition of G. It is well known that computing a tree-decomposition of minimum width is fixed-parameter tractable when parameterized by the treewidth [47, 10], and even more efficient algorithms exist for obtaining near-optimal tree-decompositions [48].

A tree-decomposition (T, B) is nice if every node $t\in V(T)$ is of exactly one of the following four types:
Leaf: t is a leaf of T and $|B_t|=0$.
Introduce: t has a unique child c and there exists $v\in V$ such that $B_t=B_c\cup\{v\}$.
Forget: t has a unique child c and there exists $v\in V$ such that $B_c=B_t\cup\{v\}$.
Join: t has exactly two children $c_1,c_2$ and $B_t=B_{c_1}=B_{c_2}$.
Every graph G = (V, E) admits a nice tree-decomposition of width equal to tw(G) [11].

The tree-depth of G can be defined recursively: if |V| = 1 then G has tree-depth 1. Then, G has tree-depth k if there exists a vertex $v\in V$ such that every connected component of $G[V\setminus\{v\}]$ has tree-depth at most k − 1.

The graph G has vertex integrity k if there exists a set $U\subseteq V$ such that $|U|=k'\le k$ and all connected components of $G[V\setminus U]$ are of order at most $k-k'$. We can find such a set in FPT time parameterized by k [26].

A set S is a twin-cover [35] of G if V can be partitioned into sets $S,V_1,\dots,V_p$ such that for every $i\in[p]$, all the vertices of $V_i$ are twins. The size of a minimum twin-cover of G is the twin-cover number of G.

Let A and B be two parameters of the same graph. We will write $A\le_f B$ to denote that the parameter A is bounded from above by a function of the parameter B. Let G be a graph with treewidth tw, vertex cover number vc, tree-depth td, twin-cover number twc and vertex integrity vi. We have that $twc\le_f vc$. Moreover, $tw\le_f td\le_f vi\le_f vc$, but twc is incomparable to tw.
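Since tree-depth is defined by a simple recursion, a direct exponential-time transcription of that definition can help with intuition on small examples; the sketch below is ours and purely illustrative, not the FPT machinery referenced above.

```python
def connected_components(vertices, adj):
    """Connected components of the subgraph induced by `vertices` (adjacency dict)."""
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(x for x in adj[v] if x in vertices)
        seen |= comp
        comps.append(comp)
    return comps

def tree_depth(vertices, adj):
    """Tree-depth by its recursive definition: one vertex has tree-depth 1; otherwise
    delete the best vertex v and recurse on the components of G - v. A disconnected
    graph takes the maximum over its components."""
    if len(vertices) == 1:
        return 1
    comps = connected_components(vertices, adj)
    if len(comps) > 1:
        return max(tree_depth(c, adj) for c in comps)
    return 1 + min(max(tree_depth(c, adj)
                       for c in connected_components(vertices - {v}, adj))
                   for v in vertices)

# Path on four vertices: deleting a middle vertex leaves pieces of tree-depth <= 2.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(tree_depth({1, 2, 3, 4}, adj))  # 3
```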
3. Bounded Tree-width or Vertex Cover Number

This section includes both the positive and negative results we provide for graphs of bounded tree-width or bounded vertex cover number.

3.1. FPT Algorithm parameterized by tw + C

We begin by providing an FPT algorithm for Weighted C-Coalition Formation parameterized by tw + C, where tw is the treewidth of the input graph. Before we proceed to the main theorem of this section, allow us to briefly comment upon the Max Utilitarian ASHG (MU for short). Simply put, MU is a restriction of Weighted C-Coalition Formation where C = n (i.e., the size of the coalitions is unbounded). The authors of [41] provide an FPT algorithm for MU parameterized by the treewidth of the input graph. We stress, however, that as MU is a special case of Weighted C-Coalition Formation, the above FPT algorithm cannot be used to deal with our problem in the general case. We have the following result.

Theorem 1.1. The Weighted C-Coalition Formation problem can be solved in time $(tw\cdot C)^{O(tw)}n^{O(1)}$, where tw is the treewidth of the input graph.

Proof. As the techniques we are going to use are standard, we only sketch some of the introductory details. For more details on tree decompositions (definition and terminology), see [25]. Assuming that we have a nice tree decomposition T of the graph G rooted at a node r, we are going to perform dynamic programming on the nodes of T. For a node t of T, we denote by $B_t$ the bag of this node and by $B^{\downarrow}_t$ the set of vertices of the graph that appear in the bags of the nodes of the subtree rooted at t. Observe that $B_t\subseteq B^{\downarrow}_t$.

In order to simplify some parts of the proof, we assume that the C-partitions we look into are allowed to include empty sets. In particular, whenever we consider a C-partition $P=\{C_1,\dots,C_p\}$ of a graph $G[B^{\downarrow}_t]$, we assume that it is of the following form:
• $p\ge tw+1$,
• for any set $C_j\in P$, if $j\in[tw+1]$ then either $C_j=\emptyset$ or $C_j\cap B_t\ne\emptyset$, and
• for any set $C_j\in P$, if $j>tw+1$ then $C_j\ne\emptyset$ and $C_j\cap B_t=\emptyset$.
Note that any C-partition can be made to fit such a form without affecting its value. Also, for any node t of the tree decomposition and any C-partition of $G[B^{\downarrow}_t]$, no more than tw + 1 sets of the C-partition can intersect $B_t$. Thus, we do not need to store more sets of P intersecting $B_t$.

For all nodes t of the tree decomposition, we will create all the C-partitions of $G[B^{\downarrow}_t]$ that are needed in order to find an optimal C-partition; this will be achieved by storing only $(tw\cdot C)^{O(tw)}$ C-partitions for each bag. In order to decide which C-partitions we need to keep, we first define types of C-partitions of $G[B^{\downarrow}_t]$ based on their intersection with $B_t$ and the sizes of their sets. In particular, we define a coloring function $Col:B_t\to[tw+1]$ and a table S of size tw + 1 such that $0\le S[i]\le C$ for all $i\in[tw+1]$.
We will say that a C-partition $P=\{C_1,\dots,C_p\}$ is of type $(Col,S)_t$ if:
• P is a C-partition of $G[B^{\downarrow}_t]$,
• for any $i\le tw+1$ and $u\in B_t$, we have that $Col(u)=i$ if and only if $u\in C_i\cap B_t$, and
• $S[i]=|C_i|$ for all $i\in[tw+1]$.
For any C-partition P of type $(Col,S)_t$, the function Col describes the way that P partitions the set $B_t$. Also, the table S gives us the sizes of the sets of P that intersect $B_t$. Finally, for any node t, a C-partition of type $(Col,S)_t$ will be called important if it has value greater than or equal to the value of any other C-partition of the same type. Notice that any optimal C-partition of the given graph is also an important C-partition of the root of the tree decomposition. Therefore, to compute an optimal C-partition of G, it suffices to find an important C-partition of maximum value among all the important C-partitions of the root of the given tree decomposition of G.

We now present the information we will keep for each node. Let t be a node of the tree decomposition, $Col:B_t\to[tw+1]$ be a function and S be a table of size tw + 1 such that $0\le S[i]\le C$ for all $i\in[tw+1]$. If there exists an important C-partition of type $(Col,S)_t$, then we store a tuple (Col, S, W, P) for t, where P is an important C-partition of type $(Col,S)_t$ and W is its value. Observe that W is the value of a partition of the whole subgraph induced by the vertices belonging to $B^{\downarrow}_t$. We now explain how to deal with each kind of node of the nice tree decomposition.

Leaf Nodes. Since the leaf nodes contain no vertices, we do not need to keep any non-trivial coloring. Also, all the positions of the table S are equal to 0. Finally, we keep a C-partition $P=\{C_1,\dots,C_{tw+1}\}$ where $C_i=\emptyset$ for all $i\in[tw+1]$.

Introduce Nodes. Let t be an introduce node with c being its child node and u be the newly introduced vertex. We will use the tuples we have computed for c in order to build one important C-partition for each type of C-partition that exists for t. For each tuple (Col, S, W, P) of c, we create at most tw + 1 tuples for t as follows. For each color $i\in[tw+1]$ we consider two cases: either $0\le S[i]<C$ or $S[i]=C$. If $0\le S[i]<C$, then we set $Col(u)=i$, increase $S[i]$ by one, extend the C-partition P by adding u to the set $C_i$, and increase W by $\sum_{uv\in E,\,v\in C_i}w(uv)$. If $S[i]=C$ then we cannot color u with the color i, as the corresponding set is already of size C. (A code sketch of this transition is given below.)
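The following sketch is ours, not the paper's: it fixes one concrete Python representation that the proof does not prescribe (a type is keyed by the frozen coloring and the size table; the stored partition bookkeeping is elided), and it mirrors the introduce-node transition just described.

```python
def introduce_node(table_c, bag_c, u, adj_w, tw, cap):
    """Build the table of an introduce node t from its child c by coloring the new
    vertex u. table_c maps a type key (frozenset(col.items()), tuple(sizes)) to the
    best partition value W seen for that type; keeping only the maximum W per key
    is exactly the 'important C-partition' rule."""
    table_t = {}
    for (col_key, sizes), W in table_c.items():
        col = dict(col_key)
        for i in range(1, tw + 2):               # candidate colors 1 .. tw+1
            if sizes[i - 1] >= cap:              # class i is already of size C
                continue
            new_col = dict(col)
            new_col[u] = i
            new_sizes = list(sizes)
            new_sizes[i - 1] += 1
            # u gains the weight of its edges to vertices of color i visible in the
            # bag: any neighbor of u inside B↓_c must share the bag B_c with u.
            gain = sum(wt for v, wt in adj_w.get(u, {}).items()
                       if v in bag_c and col.get(v) == i)
            key = (frozenset(new_col.items()), tuple(new_sizes))
            if W + gain > table_t.get(key, -1):
                table_t[key] = W + gain
    return table_t
```

Here `adj_w` is an assumed adjacency structure mapping each vertex to a dict of weighted neighbors; nothing else about the decomposition is needed at this step.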
https://arxiv.org/abs/2505.22384v1
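In code, the introduce step amounts to trying every color whose set still has room. The sketch below reuses the helpers from the earlier sketch and is again our own illustration; the edge-weight dictionary and the bag ordering are assumed inputs.

\begin{verbatim}
def introduce(child_table, u, child_bag, w, cap, tw):
    """Table of an introduce node t, built from its child's table.

    child_bag: ordered list of the child's bag; the bag of t is
    child_bag + [u].  w: dict mapping frozenset({a, b}) -> edge weight.
    """
    table = new_table()
    for (col, sizes), (W, P) in child_table.items():
        for i in range(tw + 1):
            if sizes[i] >= cap:      # the i-th set already has C vertices
                continue
            # All neighbors of u inside B_t^down lie in the bag, so the
            # gain is the weight of edges from u to bag vertices colored i.
            gain = sum(w.get(frozenset((u, v)), 0)
                       for a, v in enumerate(child_bag) if col[a] == i)
            new_P = P[:i] + (P[i] | {u},) + P[i + 1:]
            update(table,
                   col + (i,),                                   # Col_t(u) = i
                   sizes[:i] + (sizes[i] + 1,) + sizes[i + 1:],  # S_t[i] += 1
                   W + gain,
                   new_P)
    return table
\end{verbatim}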
First, we need to prove that, this way, we create at least one important C-partition for $t$ for each type of C-partition of $G[B_t^{\downarrow}]$. Assume that for a type $(Col, S)_t$ there exists an important C-partition $\mathcal{P} = \{C_1, \dots, C_p\}$ of $B_t^{\downarrow}$. Let $\mathcal{P}_c$ be the C-partition defined by the restriction of $\mathcal{P}$ to the vertex set $B_c^{\downarrow}$; that is, $\mathcal{P}_c = \{C_1^c, \dots, C_p^c\}$ where $C_i^c = C_i \cap B_c^{\downarrow}$ for all $i \in [p]$. Notice that, since $c$ is the child of an introduce node, there exists a $k \in [p]$ such that $C_k^c = C_k \setminus \{u\}$ and $C_i^c = C_i$ for all $i \in [p] \setminus \{k\}$. Also, note that $C_k^c$ may be empty. Since $\mathcal{P}$ is a C-partition of $G[B_t^{\downarrow}]$, we have that $\mathcal{P}_c$ is a C-partition of $G[B_c^{\downarrow}]$. Furthermore, let $Col' : B_c \to [\mathrm{tw}+1]$ be such that $Col'(v) = Col(v)$ for all $v \in B_c$, and let $S'$ be a table where $S'[i] = S[i]$ for all $i \in [\mathrm{tw}+1] \setminus \{k\}$ and $S'[k] = S[k] - 1$. Observe that $\mathcal{P}_c$ is of type $(Col', S')_c$.

Since $\mathcal{P}_c$ is of type $(Col', S')_c$, we know that we have stored a tuple $(Col', S', W', \mathcal{P}')$ for $c$, where $\mathcal{P}' = \{C_1', \dots, C_{p'}'\}$ is an important C-partition of $G[B_c^{\downarrow}]$. Note that $\mathcal{P}'$ is not necessarily the same as $\mathcal{P}_c$, but both of these C-partitions are of the same type. While constructing the tuples of $t$, at some point the algorithm will consider the tuple $(Col', S', W', \mathcal{P}')$. At this stage, the algorithm will add the vertex $u$ to any set of $\mathcal{P}'$ of size at most $\mathcal{C}-1$, creating a different tuple for each option. These options include the set colored by $k$; let $(Col_t, S_t, W_t, \mathcal{P}_t)$ be the corresponding tuple, where $\mathcal{P}_t = \{C_1^t, \dots, C_{p'}^t\}$. Observe that in this case, $u$ is colored $k$ (i.e., $Col_t(u) = k = Col(u)$), $S'[k]$ is increased by one (i.e., $S_t[k] = S'[k] + 1 = S[k]$) and $u$ is added to $C_k'$ (i.e., $C_k^t = C_k' \cup \{u\}$). Notice that $Col_t(v) = Col(v)$ for all $v \in B_t$ and $S_t[i] = S[i]$ for all $i \in [\mathrm{tw}+1]$. Therefore, it suffices to show that $\mathcal{P}_t$ is also an important C-partition of $G[B_t^{\downarrow}]$. Indeed, this would indicate that $val(\mathcal{P}) = val(\mathcal{P}_t)$, since $\mathcal{P}$ and $\mathcal{P}_t$ would both be important partitions of the same type. On the one hand, we have:
\[
val(\mathcal{P}) = val(\mathcal{P}_c) + \sum_{uv \in E,\, v \in C_k} w(uv) = val(\mathcal{P}_c) + \sum_{uv \in E,\, v \in B_t,\, Col(v) = k} w(uv).
\]
On the other hand:
\[
val(\mathcal{P}_t) = val(\mathcal{P}') + \sum_{uv \in E,\, v \in C_k'} w(uv) = W' + \sum_{uv \in E,\, v \in B_t,\, Col'(v) = k} w(uv).
\]
Since $Col(v) = Col'(v)$ for all $v \in B_t$, the two sums above are equal. Therefore we need to compare $W'$ with $val(\mathcal{P}_c)$. Note that $\mathcal{P}_c$ and $\mathcal{P}'$ are both C-partitions of $G[B_c^{\downarrow}]$ of the same type. Thus, $W' = val(\mathcal{P}') \ge val(\mathcal{P}_c)$. It follows that $val(\mathcal{P}) \le val(\mathcal{P}_t)$, and since $\mathcal{P}$ is important, we have that $val(\mathcal{P}_t) = val(\mathcal{P})$ and that $\mathcal{P}_t$ is also important.

Forget Nodes. Let $t$ be a forget node with child node $c$, and let $u$ be the forgotten vertex. We will use the tuples computed for $c$ in order to build one important C-partition for each type of C-partition that exists for $t$. For each tuple $(Col, S, W, \mathcal{P})$ of $c$ we create one tuple $(Col', S', W', \mathcal{P}')$ for $t$ as follows. Let $Col(u) = i$. We consider two cases: either $C_i \cap B_t = \emptyset$ or not. In the former case, the color $i$ does not appear on any vertex of $B_c \setminus \{u\} = B_t$; therefore, we are free to reuse this color. To do so, we set $S'[i] = 0$ and we modify $\mathcal{P}$: if $\mathcal{P} = \{C_1, \dots, C_k\}$, we create a new C-partition $\mathcal{P}' = \{C_1', \dots, C_{k+1}'\}$ where $C_j' = C_j$ for all $j \in [k] \setminus \{i\}$, $C_i' = \emptyset$ and $C_{k+1}' = C_i$. Also, we define $Col'$ as the restriction of the function $Col$ to the set $B_t$. Finally, $W' = W$. In the latter case, it suffices to restrict $Col$ to the set $B_t$; we keep all the other information the same.
We will now prove that, for any type of C-partition of $t$, if there exists a C-partition of that type, we have created an important C-partition of that type. Assume that for a type $(Col, S)_t$ there exists an important C-partition $\mathcal{P} = \{C_1, \dots, C_p\}$ of $G[B_t^{\downarrow}]$ of value $W$. We consider two cases: either $u \in C_\ell$ for some $\ell \le \mathrm{tw}+1$, or $u \in C_\ell$ for some $\ell > \mathrm{tw}+1$.

Case 1: $u \in C_\ell$ for some $\ell \le \mathrm{tw}+1$. In this case, $C_\ell \cap B_t \ne \emptyset$. This follows from the assumption that any C-partition $\mathcal{P} = \{C_1, \dots, C_p\}$ we consider is such that, for any set $C_j \in \mathcal{P}$, if $j \in [\mathrm{tw}+1]$ then either $C_j = \emptyset$ or $C_j \cap B_t \ne \emptyset$, and because $\{v \mid v \in B_c \setminus \{u\},\, Col(v) = \ell\} \ne \emptyset$. Let $Col_c : B_c \to [\mathrm{tw}+1]$ be such that $Col_c(u) = \ell$ and $Col_c(v) = Col(v)$ for all $v \in B_t$. Notice that $\mathcal{P}$ is of type $(Col_c, S)_c$. Let $(Col_c, S, W', \mathcal{P}')$ be the tuple stored at $c$ for this type, where $\mathcal{P}' = \{C_1', \dots, C_{p'}'\}$ is of type $(Col_c, S)_c$. At some point while creating the tuples of $t$, the tuple $(Col_c, S, W', \mathcal{P}')$ was considered; let $(Col_c', S', W', \mathcal{P}_t')$ be the tuple created at that step. Notice that, since $Col_c$ is an extension of $Col$ to the set $B_c$ and $C_\ell \cap B_t = \{v \in B_t \mid Col(v) = \ell\} \ne \emptyset$, we have that $\{v \in B_t \mid Col_c(v) = \ell\} \ne \emptyset$. Therefore $\{v \in B_t \mid Col_c'(v) = \ell\} = \{v \in B_t \mid Col_c(v) = \ell\} \ne \emptyset$. It follows from the construction of $(Col_c', S', W', \mathcal{P}_t')$ that $Col_c'(v) = Col(v)$ for all $v \in B_t$. Also, since $\{v \in B_t \mid Col_c'(v) = \ell\} \ne \emptyset$, the vertex $u$ was not the only vertex colored $\ell$; therefore, $S'$ is the same as $S$. This gives us that $\mathcal{P}_t'$ and $\mathcal{P}$ are of the same type in $t$. That is, $(Col_c', S')_t = (Col, S)_t$ and we have stored a tuple for this type. It remains to show that $\mathcal{P}_t'$ is an important partition of its type in $t$. This is indeed the case, as $\mathcal{P}$ and $\mathcal{P}'$ have the same type in $c$ and $\mathcal{P}'$ is an important partition of this type in $c$. Since the value of the two partitions does not change in $t$ and they remain of the same type, we have that $\mathcal{P}_t'$ is an important partition of its type in $t$.

Case 2: $u \in C_\ell$ for some $\ell > \mathrm{tw}+1$. In this case we have that $C_\ell \cap B_t = \emptyset$ and $C_\ell \cap B_c = \{u\}$. Notice that at least one of the $C_i$, $i \in [\mathrm{tw}+1]$, must be empty. Indeed, if $C_i \cap B_c = C_i \cap B_t \ne \emptyset$ for all $i \in [\mathrm{tw}+1]$, we would have $\mathrm{tw}+2$ sets intersecting $B_c$ (including $C_\ell$); this is a contradiction, as these sets must be disjoint and $|B_c| \le \mathrm{tw}+1$.

First, we need to modify the partition $\mathcal{P}$ so that it respects the second item of the assumptions we have made for the C-partitions at $c$. To do so, select any $k \in [\mathrm{tw}+1]$ such that $C_k = \emptyset$ and set $C_k = C_\ell$; then shift the subsequent sets to close the gap left by $C_\ell$ and remove the (now empty) last set. Let $\mathcal{P}_c = \{C_{c,1}, \dots, C_{c,p-1}\}$ be the resulting C-partition of $c$. We define $Col_c : B_c \to [\mathrm{tw}+1]$ such that, for all $v \in B_c$, we have $Col_c(v) = i$ if and only if $v \in C_{c,i}$. Notice that $Col$ is the restriction of $Col_c$ to the vertex set $B_t$. Also, we define $S_c$ to be the table of size $\mathrm{tw}+1$ such that $S_c[i] = |C_{c,i}|$ for all $i \in [\mathrm{tw}+1]$. Notice that for all $i \in [\mathrm{tw}+1] \setminus \{k\}$ we have $S[i] = S_c[i]$, while $S_c[k] \ne 0$ and $S[k] = 0$.

Observe that $\mathcal{P}$ is of type $(Col, S)_t$ and $\mathcal{P}_c$ is of type $(Col_c, S_c)_c$. Therefore, let $(Col_c, S_c, W', \mathcal{P}')$ be the tuple we have stored at $c$, where $\mathcal{P}'$ is an important partition of type $(Col_c, S_c)_c$. At some point while constructing the tuples of $t$, we consider the tuple $(Col_c, S_c, W', \mathcal{P}')$ and create a tuple $(Col_t, S_t, W', \mathcal{P}_t)$ for $t$. We claim that $\mathcal{P}_t$ is of the same type as $\mathcal{P}$ and that $\mathcal{P}_t$ is an important partition of that type. Notice that $u$ is the only vertex of $B_c$ such that $Col_c(u) = k$. It follows that $(Col_t, S_t, W', \mathcal{P}_t)$ was created by setting:
• $Col_t$ to be the restriction of $Col_c$ to the set $B_t$,
• $S_t[k] = 0$ and $S_t[i] = S_c[i]$ for $i \in [\mathrm{tw}+1] \setminus \{k\}$, and
• $\mathcal{P}_t$ to be obtained from $\mathcal{P}'$ by following the steps described by the algorithm.

Notice that $\mathcal{P}'$ and $\mathcal{P}_t$ are the same C-partition, presented in a different way. By the construction of $Col_t$ and $S_t$, the type $(Col_t, S_t)_t$ is the same as $(Col, S)_t$. It follows that there exists a tuple $(Col, S, W', \mathcal{P}_t)$ stored at $t$, where $\mathcal{P}_t$ is of type $(Col, S)_t$. It remains to show that $\mathcal{P}_t$ is an important partition of its type. Notice that $\mathcal{P}$ and $\mathcal{P}_c$ are the same C-partition, so they have the same value; the same holds for $\mathcal{P}'$ and $\mathcal{P}_t$. Finally, since $\mathcal{P}'$ and $\mathcal{P}_c$ have the same type in $c$ and $\mathcal{P}'$ is an important partition, we have that $val(\mathcal{P}') \ge val(\mathcal{P}_c)$. So $val(\mathcal{P}_t) = val(\mathcal{P}') \ge val(\mathcal{P}_c) = val(\mathcal{P})$, from which it follows that $\mathcal{P}_t$ is also an important partition.
Join Nodes. Let $t$ be a join node with children $c_1$ and $c_2$. We will use the tuples computed for $c_1$ and $c_2$ in order to build one important C-partition for each type of C-partition that exists for $t$. For any pair of tuples $(Col_1, S_1, W_1, \mathcal{P}_1)$ and $(Col_2, S_2, W_2, \mathcal{P}_2)$, of $c_1$ and $c_2$ respectively, we will create a tuple $(Col, S, W, \mathcal{P})$ for $t$ if:
• $Col_1(u) = Col_2(u)$ for all $u \in B_t$ (which is the same as $B_{c_1}$ and $B_{c_2}$), and
• for all $i \in [\mathrm{tw}+1]$, we have $S_1[i] + S_2[i] - |C_i \cap B_t| \le \mathcal{C}$, where $C_i$ is the $i$-th set of $\mathcal{P}_1$.

Note that the choice of $\mathcal{P}_1$ here is arbitrary because of the first condition. Indeed, the first condition guarantees that $\mathcal{P}_1$ and $\mathcal{P}_2$ “agree” on the vertices of $B_t$: the vertices of $B_t$ are partitioned into the same sets according to $\mathcal{P}_1$ and $\mathcal{P}_2$. The second condition guarantees that the sets created for $\mathcal{P}$ are of size at most $\mathcal{C}$. The tuple $(Col, S, W, \mathcal{P})$ is created as follows. We set:
• $Col(u) = Col_1(u)$ for all $u \in B_t$,
• $S[i] = S_1[i] + S_2[i] - |C_i \cap B_t|$ for all $i \in [\mathrm{tw}+1]$, and
• $W = W_1 + W_2 - \sum_{uv \in E(G[B_t]),\, Col(u) = Col(v)} w(uv)$.

Once more, $C_i$ is chosen w.l.o.g. to be the $i$-th set of $\mathcal{P}_1$. We are now ready to define $\mathcal{P}$. Let $\mathcal{P}_1 = \{C_1^1, \dots, C_p^1\}$ and $\mathcal{P}_2 = \{C_1^2, \dots, C_{p'}^2\}$; we create the C-partition $\mathcal{P} = \{C_1, \dots, C_{p+p'-\mathrm{tw}-1}\}$ as follows. For any $i \in [\mathrm{tw}+1]$, set $C_i = C_i^1 \cup C_i^2$. For any $i \in [p] \setminus [\mathrm{tw}+1]$, set $C_i = C_i^1$. Last, for any $i \in [p'] \setminus [\mathrm{tw}+1]$, set $C_{p+i-\mathrm{tw}-1} = C_i^2$. This completes the construction of the tuple we keep for $t$, for each pair of tuples stored for $c_1$ and $c_2$.
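The combination rule above translates directly into code. The sketch below, again our own illustration reusing the helpers from the earlier sketches, groups the second child's entries by coloring and subtracts the doubly counted bag quantities exactly as in the three bullets.

\begin{verbatim}
from collections import defaultdict

def join(table1, table2, bag, w, cap, tw):
    """Table of a join node t from the tables of its two children.

    Both children have the bag B_t (here `bag`, an ordered list); only
    pairs of entries with identical colorings are compatible.
    """
    by_col = defaultdict(list)
    for (col, s2), (W2, P2) in table2.items():
        by_col[col].append((s2, W2, P2))

    def bag_edges(col):
        # Weight of bag edges with both endpoints of the same color;
        # these are counted in both W1 and W2, so we subtract them once.
        total = 0
        for a in range(len(bag)):
            for b in range(a + 1, len(bag)):
                if col[a] == col[b]:
                    total += w.get(frozenset((bag[a], bag[b])), 0)
        return total

    table = new_table()
    for (col, s1), (W1, P1) in table1.items():
        in_bag = [sum(1 for a in range(len(bag)) if col[a] == i)
                  for i in range(tw + 1)]             # |C_i ∩ B_t|
        for s2, W2, P2 in by_col[col]:
            sizes = tuple(s1[i] + s2[i] - in_bag[i] for i in range(tw + 1))
            if any(s > cap for s in sizes):           # capacity check
                continue
            P = tuple(P1[i] | P2[i] for i in range(tw + 1)) \
                + P1[tw + 1:] + P2[tw + 1:]
            update(table, col, sizes, W1 + W2 - bag_edges(col), P)
    return table
\end{verbatim}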
We will now prove that, for any type of C-partition of $t$, if there exists a C-partition of that type, we have created an important C-partition of that type. Assume that for a type $(Col, S)_t$ of $t$ there exists an important C-partition $\mathcal{P} = \{C_1, \dots, C_p\}$ of $G[B_t^{\downarrow}]$. Let $\mathcal{P}_1 = \{C_1 \cap B_{c_1}^{\downarrow}, \dots, C_p \cap B_{c_1}^{\downarrow}\}$ and $\mathcal{P}_2 = \{C_1 \cap B_{c_2}^{\downarrow}, \dots, C_p \cap B_{c_2}^{\downarrow}\}$. Notice that $\mathcal{P}_1$ and $\mathcal{P}_2$ are C-partitions of $G[B_{c_1}^{\downarrow}]$ and $G[B_{c_2}^{\downarrow}]$, respectively. Let $(Col, S_1)_{c_1}$ and $(Col, S_2)_{c_2}$ be the types of $\mathcal{P}_1$ and $\mathcal{P}_2$, respectively (recall that, by construction, $Col_1 = Col_2 = Col$). The existence of $\mathcal{P}_1$ (respectively $\mathcal{P}_2$) guarantees that there is a tuple $(Col, S_1, W_1, \mathcal{P}_1')$ (resp. $(Col, S_2, W_2, \mathcal{P}_2')$) stored for the node $c_1$ (resp. $c_2$). By the definition of $\mathcal{P}_1$ and $\mathcal{P}_2$, we have that $S[i] = S_1[i] + S_2[i] - |C_i \cap B_t| \le \mathcal{C}$ for all $i \in [\mathrm{tw}+1]$. It follows that, while constructing the tuples of $t$, at some point the algorithm considered the pair of tuples $(Col, S_1, W_1, \mathcal{P}_1')$ and $(Col, S_2, W_2, \mathcal{P}_2')$, and created the tuple $(Col, S', W', \mathcal{P}')$ for $t$. Notice that, by the construction of $S'$, we have $S[i] = S'[i]$ for all $i \in [\mathrm{tw}+1]$; therefore the type $(Col, S')_t$ is the same as $(Col, S)_t$.

It remains to show that $\mathcal{P}'$ is an important partition of its type. Notice that $val(\mathcal{P}') = W' = W_1 + W_2 - \sum_{uv \in E(G[B_t]),\, Col(u)=Col(v)} w(uv)$ and $val(\mathcal{P}) = val(\mathcal{P}_1) + val(\mathcal{P}_2) - \sum_{uv \in E(G[B_t]),\, Col(u)=Col(v)} w(uv)$. Since $W_1$ is the weight of an important partition of the same type as $\mathcal{P}_1$ in $c_1$, we have $val(\mathcal{P}_1) \le W_1$. Similarly, $W_2$ is the weight of an important partition of the same type as $\mathcal{P}_2$ in $c_2$, so $val(\mathcal{P}_2) \le W_2$. Overall:
\[
val(\mathcal{P}') = W_1 + W_2 - \sum_{\substack{uv \in E(G[B_t]) \\ Col(u)=Col(v)}} w(uv) \ge val(\mathcal{P}_1) + val(\mathcal{P}_2) - \sum_{\substack{uv \in E(G[B_t]) \\ Col(u)=Col(v)}} w(uv) = val(\mathcal{P}).
\]
Thus, $\mathcal{P}'$ is an important partition of its type in $t$. This finishes the description of our algorithm, as well as the proof of its correctness.

It remains to compute the running time of our algorithm. First we calculate the number of different types of C-partitions for a node $t$. We have at most $(\mathrm{tw}+1)^{\mathrm{tw}+1}$ different functions $Col$ and $(\mathcal{C}+1)^{\mathrm{tw}+1}$ different tables $S$; therefore, we have $(\mathrm{tw}\,\mathcal{C})^{O(\mathrm{tw})}$ different types for each node. Since we are storing one tuple per type, we store $(\mathrm{tw}\,\mathcal{C})^{O(\mathrm{tw})}$ tuples for each node of the tree decomposition. Moreover, we create just one tuple for each leaf node. Also, the tuples of the child of each introduce and forget node are considered once; therefore, we can compute all tuples for these nodes in time $(\mathrm{tw}\,\mathcal{C})^{O(\mathrm{tw})}$. As for the join nodes, in the worst case we may need to consider all pairs of tuples of their children that share the same coloring function; this still does not result in more than $(\mathrm{tw}\,\mathcal{C})^{O(\mathrm{tw})}$ combinations. Finally, as all the other calculations remain polynomial in the number of vertices, the total time required is $(\mathrm{tw}\,\mathcal{C})^{O(\mathrm{tw})}|V(G)|^{O(1)}$.
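As a quick sanity check on this bound (our own restatement, not an additional claim), the number of stored types per node multiplies out as follows:
\[
\underbrace{(\mathrm{tw}+1)^{\mathrm{tw}+1}}_{\text{choices of } Col} \cdot \underbrace{(\mathcal{C}+1)^{\mathrm{tw}+1}}_{\text{choices of } S}
= \bigl((\mathrm{tw}+1)(\mathcal{C}+1)\bigr)^{\mathrm{tw}+1}
= (\mathrm{tw}\,\mathcal{C})^{O(\mathrm{tw})}.
\]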
3.2. Intractability for tree-depth

In this section, we establish that C-Coalition Formation is W[1]-hard when parameterized by the tree-depth of the input graph, complementing our algorithm from Theorem 1.1. To this end, we provide a rather involved reduction from General Factors, defined below.

General Factors
Input: A graph $H$ and a list function $L : V(H) \to \mathcal{P}(\{0, \dots, \Delta(H)\})$ that specifies the available degrees for each vertex $u \in V(H)$.
Question: Does there exist a set $S \subseteq E(H)$ such that $d_{H-S}(u) \in L(u)$ for all $u \in V(H)$?

Figure 3: The gadgets used in the proof of Theorem 1.2: (a) the gadget $F_{x,5A-2y}$, with parts $V_1(F_{x,5A-2y})$ and $V_2(F_{x,5A-2y})$; (b) the gadget $F_{x,\mathcal{C}-y,z}$, with parts $V_1(F_{x,\mathcal{C}-y,z})$, $V_2(F_{x,\mathcal{C}-y,z})$ and $V_3(F_{x,\mathcal{C}-y,z})$.

Proposition 3.1 ([39]). General Factors is W[1]-hard even on bipartite graphs when parameterized by the size of the smallest bipartition.

To ease the exposition, we will first present the construction of our reduction and a high-level idea of the proof before we proceed with the proof itself. We will then show that any optimal C-partition of the constructed graph satisfies a set of important properties that will be utilized in the reduction.

The construction. Let $(H, L)$ be an instance of the General Factors problem, where $H = (V_L, V_R, E)$ is a bipartite graph ($V(H) = V_L \cup V_R$ and $E(H) = E$) and $L : V_L \cup V_R \to \mathcal{P}([|V(H)|])$ gives the list of degrees for each vertex. Notice that, normally, $|L(u)| \le d(u) \le |V(H)|$. Nevertheless, we can assume that $|L(u)| = |V(H)|$, as we can allow $L(u)$ to be a multiset. Hereafter, we assume that the size of the smallest bipartition is $m$ and the total number of vertices is $n = |V(H)|$; note that $m \le n/2$. We can also assume that $m > 2$, as otherwise we could decide whether $(H, L)$ is a yes-instance of General Factors in polynomial time.

Starting from $(H, L)$, we will construct a graph $G$ such that any C-partition of $G$, for $\mathcal{C} = 100n^3$, has value exceeding a threshold if and only if $(H, L)$ is a yes-instance of General Factors. We start by carefully setting values so that our reduction works: we define $A = n^2$, $B = 5n^2 + 3m + 4$ and $D = 2m + 5$, which will be useful for the constructions and calculations that follow.

We now describe the two different gadgets, denoted by $F_{x,5A-2y}$ and $F_{x,\mathcal{C}-y,z}$. The gadget $F_{x,5A-2y}$ is defined whenever $4xy < 5A - 2y$. It is constructed as follows (illustrated in Figure 3(a)):
• we create two independent sets $U$ and $V$ of size $x$ and $5A - 2y$ respectively,
• we add all edges between vertices of $U$ and $V$, and
• we add $2xy$ edges between vertices of $V$ such that the graph induced by the vertices incident to these edges is an induced matching (we have enough vertices because we assumed that $4xy < 5A - 2y$).

Hereafter, for any gadget $F = F_{x,5A-2y}$ we will refer to $U$ as $V_1(F)$ and to $V$ as $V_2(F)$. The construction of $F_{x,\mathcal{C}-y,z}$ is as follows (illustrated in Figure 3(b)):
• we create three independent sets $U$, $V$ and $W$ of size $x$, $\mathcal{C} - y$ and $z$ respectively, and
• we add all edges between vertices of $U$ and $V$, and all edges between vertices of $V$ and $W$.

Hereafter, for any gadget $F = F_{x,\mathcal{C}-y,z}$ we will refer to $U$ as $V_1(F)$, to $V$ as $V_2(F)$ and to $W$ as $V_3(F)$. Before we continue, notice that $|E(F_{x,5A-2y})| = 5xA$ and $|E(F_{x,\mathcal{C}-y,z})| = (x+z)(\mathcal{C}-y)$.
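Both edge counts can be verified directly from the construction; the following short derivation is our own restatement:
\[
|E(F_{x,5A-2y})| = \underbrace{x(5A-2y)}_{\text{edges between } U \text{ and } V} + \underbrace{2xy}_{\text{matching edges inside } V} = 5xA,
\qquad
|E(F_{x,\mathcal{C}-y,z})| = x(\mathcal{C}-y) + z(\mathcal{C}-y) = (x+z)(\mathcal{C}-y).
\]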
We are now ready to describe the construction of the graph $G$, illustrated in Figure 4. First, for each vertex $v \in V(H)$, we create a copy $F_v$ of the $F_{4,\mathcal{C}-B,2m+10}$ gadget; we say that this is a vertex-gadget. We also fix a set $U(F_v) \subset V_1(F_v)$ such that $|U(F_v)| = 2$. Now, for any vertex $v \in V(H)$ and integer $\alpha \in L(v)$, we create a copy $F_\alpha(v)$ of the $F_{m+6,5A-2\alpha}$ gadget; we say that this is a list-gadget. We add all the edges between $V_2(F_\alpha(v))$ and $U(F_v)$. Recall that we have assumed $|L(v)| = |V(H)|$ for all $v \in V(H)$. So, for each vertex $v$ of $H$, in addition to $F_v$, we have created $|V(H)| = n$ gadgets (one for each element of the list). Finally, for each edge $e = uv \in E(H)$, where $u \in V_L$ and $v \in V_R$, we create a copy $F_e$ of the $F_{1,\mathcal{C}-D,2m}$ gadget; we say that this is an edge-gadget. Then, we add a set of vertices $V_e = \{w_{L_1}^e, w_{L_2}^e, w_{R_1}^e, w_{R_2}^e\}$. We add all the edges between $V_1(F_e)$ and $V_e$, all the edges between $V_1(F_u)$ and $\{w_{L_1}^e, w_{L_2}^e\}$, all the edges between $V_1(F_v)$ and $\{w_{R_1}^e, w_{R_2}^e\}$, and the edges $w_{L_i}^e w_{R_j}^e$ for all $i, j \in [2]$ (i.e., $V_e$ induces a $K_{2,2}$). Hereafter, let $V_E = \bigcup_{e \in E(H)} V_e$ and $U_E = \bigcup_{e \in E(H)} V(F_e)$. This completes the construction of $G$.

High-level idea. The reduction works for a carefully chosen value of $\mathcal{C}$. Also, each gadget has the number of its vertices carefully tweaked through the values of $x$, $y$ and $z$. We proceed by showing that in any optimal C-partition of $G$, for every gadget, its vertices belong to the same set of the partition. Moreover, each gadget $F_v$ will be in the same set as exactly one of the $F_\alpha(v)$. Then, we are left with the vertices of $V_{uv}$, which serve as translators between the two problems. Intuitively, if an edge $uv$ does not appear in the set $S$ of the solution of the General Factors instance, then any optimal C-partition $\mathcal{P}$ of $G$ will be such that the vertices of $V_{uv}$ are in the same set as the vertices of $F_{uv}$. In particular, every C-partition $\mathcal{P}$ of $G$ whose value exceeds the threshold will be such that the vertices of $V_{uv}$ are split between the sets that contain the vertices of $F_v$ and $F_u$ if and only if the edge $uv$ belongs to the solution $S$ of the General Factors instance.

Properties of optimal C-partitions of $G$. Before we continue, let us introduce some notation. Observe that all the vertex-gadgets contain the same number of edges; for every vertex $v \in V(H)$, let $m_v = |E(F)|$, where $F$ is any vertex-gadget. Similarly, all edge-gadgets contain the same number of edges; for every edge $e \in E(H)$, let $m_e = |E(F)|$, where $F$ is any edge-gadget. Finally, the same holds for the list-gadgets; let $m_\ell = |E(F)|$, where $F$ is any list-gadget.

Figure 4: The graph $G$ constructed in the proof of Theorem 1.2.

Our goal is to show that an optimal C-partition $\mathcal{P}$ of $G$ has value $v(\mathcal{P}) = m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$ if and only if $(H, L)$ is a yes-instance of the General Factors problem. Assume that $\mathcal{P}$ is an optimal C-partition of $G$. First, we will show that for every gadget $F$, there exists a $C \in \mathcal{P}$ such that $V(F) \subseteq C$. Then we will prove that for any vertex-gadget $F_v$, there exists one list-gadget $F$ that represents an element of the list $L(v)$ (i.e., any $u \in U(F_v)$ and $w \in V_2(F)$ are adjacent) and there exists a $C \in \mathcal{P}$ such that $V(F_v) \cup V(F) \subseteq C$. Finally, we will show that in order for $\mathcal{P}$ to be optimal, i.e., $v(\mathcal{P}) = m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$, the vertices of $V_E$ must be partitioned such that:
• the set that includes $V(F_e)$ either includes all the vertices of $V_e$ or none of them, and
• the set that includes $V(F_v)$ and $V(F)$, for a list-gadget $F$ representing the value $\alpha \in L(v)$, also includes $2\alpha$ vertices from $V_E$.

We will show that if both of the above conditions hold, then $\mathcal{P}$ is optimal and $(H, L)$ is a yes-instance of the General Factors problem. In particular, the edges $E'$ of the solution of the General Factors instance are exactly the edges $e \in E(H)$ such that $F_e$ and $V_e$ are in the same set of $\mathcal{P}$.

Let $\mathcal{P}$ be a C-partition of $G$. For every $C \in \mathcal{P}$, we can assume that $G[C]$ is connected, as otherwise we could consider each connected component of $G[C]$ separately. We start with the following lemma:
Lemma 3.2. Let $\mathcal{P} = \{C_1, \dots, C_p\}$ be an optimal C-partition of $G$, and let $F$ be a vertex- or edge-gadget. There exists a set $C \in \mathcal{P}$ such that $C \supseteq V(F)$.

Proof. Assume that this is not true and let $F$ be a vertex- or edge-gadget such that $C \cap V(F) \ne V(F)$ for all $C \in \mathcal{P}$. We first show that $\max_{C \in \mathcal{P}}\{|C \cap V_2(F)|\} = x \ge 2|V_2(F)|/3$. Assume that $\max_{C \in \mathcal{P}}\{|C \cap V_2(F)|\} = x < 2|V_2(F)|/3$. We consider the partition $\mathcal{P}' = \{V(F), C_1 \setminus V(F), \dots, C_p \setminus V(F)\}$ and show that $v(\mathcal{P}) < v(\mathcal{P}')$. Notice that any edge not incident to a vertex of $V(F)$ is either in both edge sets or in neither of them; therefore, we only need to consider edges incident to at least one vertex of $V(F)$. Also, since all edges of $E(F)$ are included in $\mathcal{P}'$, we only need to consider edges incident to $V_1(F)$ (as any other vertex of $V(F)$ is incident only to edges of $E(F)$).

For any vertex $v \in V_1(F)$, let $d = d(v) - |V_2(F)|$ ($d$ is the same for every $v \in V_1(F)$), and let $C \in \mathcal{P}$ be the set such that $v \in C$. We have that $|C \cap N(v)| \le d + x$, from which it follows that $|E(\mathcal{P}) \setminus E(\mathcal{P}')| \le 4d$. Notice that, regardless of which gadget $F$ and vertex $v \in V_1(F)$ we consider, we have $d \le 5nA + 2n < 6nA$ (since $A = n^2$). Indeed, if $F$ is an edge-gadget then $d = 4$; and if $F$ is a vertex-gadget, then any $v \in V_1(F)$ has at most $5A$ neighbors in each of the $n$ list-gadgets related to it (if it is in $U(F)$) and at most $2n$ neighbors in $V_E$.

We will now calculate $|E(\mathcal{P}') \setminus E(\mathcal{P})|$. Consider a $v \in V_1(F) \cup V_3(F)$ and let $C \in \mathcal{P}$ be such that $v \in C$. Notice that $|C \cap V_2(F)| \le x$; therefore, at least $|V_2(F)| - x$ edges incident to $v$ belong to $E(\mathcal{P}') \setminus E(\mathcal{P})$. Since $V_1(F) \cup V_3(F)$ is an independent set, it follows that $|E(\mathcal{P}') \setminus E(\mathcal{P})| \ge |V_1(F) \cup V_3(F)|\,(|V_2(F)| - x) > (2m+4)|V_2(F)|/3 > |V_2(F)|$. Now, in order to show that $v(\mathcal{P}') > v(\mathcal{P})$, it suffices to show that $4d < |V_2(F)|$. This is indeed the case, as $|V_2(F)| \ge \mathcal{C} - B = 100n^3 - (5n^2 + 3m + 4) > 24n^3 = 24nA > 4d$. Thus, we can assume that $\max_{C \in \mathcal{P}}\{|C \cap V_2(F)|\} = x \ge 2|V_2(F)|/3$.

Let $C \in \mathcal{P}$ be the set such that $|C \cap V_2(F)| \ge 2|V_2(F)|/3$. We will show that $C \cap V_3(F) = V_3(F)$. Assume that this is not true and let $v \in V_3(F)$ be such that $v \notin C$. Notice that at most $y = |V_2(F)| - x \le |V_2(F)|/3$ edges incident to $v$ are included in $E(\mathcal{P})$. If $|C| < \mathcal{C}$, then moving $v$ from its set to $C$ increases the number of edges in $E(\mathcal{P})$ by $x - y \ge |V_2(F)|/3$; therefore, we can assume that $|C| = \mathcal{C}$. Since $G[C]$ is connected and $|C| = \mathcal{C}$, the set $C$ must include at least one vertex of $V_1(F)$ and at least one vertex of $N(V_1(F)) \setminus V_2(F)$. Notice that any vertex $u \in N(V_1(F)) \setminus V_2(F)$ has degree at most $m + 10$ (regardless of the value of $m$). Therefore, by replacing a vertex $u \in C \cap (N(V_1(F)) \setminus V_2(F))$ in $C$ by $v$, we increase the number of edges in $E(\mathcal{P})$ by at least $x - y - d(u) \ge |V_2(F)|/3 - d(u) > 0$. This contradicts the optimality of $\mathcal{P}$. Thus, we can assume that $C \cap V_3(F) = V_3(F)$.

We will show that $C \cap V_2(F) = V_2(F)$. Assume that there exists a vertex $v \in V_2(F) \setminus C$. Since $C \cap V_3(F) = V_3(F)$, we have that $|N(v) \cap C| \ge |V_3(F)| = 2m + 14$ and $|N(v) \setminus C| \le 4$. If $|C| < \mathcal{C}$, then moving $v$ from its set to $C$ increases the number of edges in $E(\mathcal{P})$ (recall that $m > 2$); thus we can assume that $|C| = \mathcal{C}$. Since $G[C]$ is connected and $|C| = \mathcal{C}$, the set $C$ must include at least one vertex of $V_1(F)$ and one vertex of $N(V_1(F)) \setminus V_2(F)$. Notice that any vertex $u \in N(V_1(F)) \setminus V_2(F)$ has degree at most $m + 10$. Therefore, by replacing a vertex $u \in C \cap (N(V_1(F)) \setminus V_2(F))$ in $C$ by $v$, we increase the number of edges in $E(\mathcal{P})$ by at least $2m + 10 - (m + 10) = m$. This contradicts the optimality of $\mathcal{P}$. Thus, we can assume that $C \cap V_2(F) = V_2(F)$.
We now show that $C \cap V_1(F) = V_1(F)$. Assume that this is not true and let $v \in V_1(F)$ be such that $v \notin C$. Notice that we may have up to $d(v) - |V_2(F)|$ edges in $E(\mathcal{P})$ that are incident to $v$. If $|C| < \mathcal{C}$, then moving $v$ from its set to $C$ increases the number of edges in $E(\mathcal{P})$ by at least $2|V_2(F)| - d(v) > 0$ (since $V_2(F) \subset C$ and $|V_2(F)| = \mathcal{C} - B$). Thus, we can assume that $|C| = \mathcal{C}$. Since $G[C]$ is connected and $|C| = \mathcal{C}$, the set $C$ must include at least one vertex of $V_1(F)$ and one vertex of $N(V_1(F)) \setminus V_2(F)$. Any vertex $u \in N(V_1(F)) \setminus V_2(F)$ can contribute at most $m + 10$ edges to $E(\mathcal{P})$. Therefore, by replacing $u$ in $C$ by $v$, we increase the number of edges in $E(\mathcal{P})$ by at least $2|V_2(F)| - d(v) - (m + 10) > 0$. This contradicts the optimality of $\mathcal{P}$, finishing the proof of the lemma.

Next, we will show that the same holds for the list-gadgets. In order to do so, we first need the following two intermediary lemmas.

Lemma 3.3. Let $\mathcal{P} = \{C_1, \dots, C_p\}$ be an optimal C-partition of $G$, and let $F$ be a list-gadget in $G$. There exists a set $C \in \mathcal{P}$ such that $|C \cap V_2(F)| \ge 3|V_2(F)|/4$.

Proof. Assume that this is not true, and let $\max_{C \in \mathcal{P}}\{|C \cap V_2(F)|\} = x < 3|V_2(F)|/4$. We create a new partition $\mathcal{P}' = \{V(F), C_1 \setminus V(F), \dots, C_p \setminus V(F)\}$ and show that $v(\mathcal{P}) < v(\mathcal{P}')$. Notice that any edge not incident to a vertex of $V_2(F)$ is either in both $\mathcal{P}$ and $\mathcal{P}'$ or in neither of them; therefore, we need to consider only the edges incident to a vertex of $V_2(F)$. Observe that any edge of $G[V(F)]$ is included in $E(\mathcal{P}')$. Thus, $E(\mathcal{P}) \setminus E(\mathcal{P}') \subseteq E(G[V_2(F) \cup U(F_v)]) \setminus E(G[V_2(F)])$ (recall that $N[V_2(F)] \cap V_1(F_v) = U(F_v)$ and $N[V_2(F)] \setminus V_1(F_v) \subseteq V(F)$). Since $\max_{C \in \mathcal{P}}\{|C \cap V_2(F)|\} = x < 3|V_2(F)|/4$, at most $3|V_2(F)|/2$ edges of $E(G[V_2(F) \cup V_1(F_v)]) \setminus E(G[V_2(F)])$ are in $E(\mathcal{P}) \setminus E(\mathcal{P}')$. Thus, $|E(\mathcal{P}) \setminus E(\mathcal{P}')| \le 3|V_2(F)|/2$.

We will now calculate the size of $E(\mathcal{P}') \setminus E(\mathcal{P})$. Since $\max_{C \in \mathcal{P}}\{|C \cap V_2(F)|\} = x < 3|V_2(F)|/4$, for each vertex $v \in V_1(F)$ there are at least $|V_2(F)|/4$ edges incident to $v$ that are included in $E(\mathcal{P}') \setminus E(\mathcal{P})$. Therefore, $|E(\mathcal{P}') \setminus E(\mathcal{P})| \ge |V_1(F)||V_2(F)|/4$. Since $|V_1(F)| = m + 6 > 6$, we have that $v(\mathcal{P}) < v(\mathcal{P}')$, which contradicts the optimality of $\mathcal{P}$.

Lemma 3.4. Let $\mathcal{P} = \{C_1, \dots, C_p\}$ be an optimal C-partition of $G$, and let $F$ be a list-gadget in $G$. There exists a set $C \in \mathcal{P}$ such that $|C \cap V_2(F)| \ge 3|V_2(F)|/4$ and $V_1(F) \subseteq C$.

Proof. By Lemma 3.3, there exists a $C \in \mathcal{P}$ such that $|C \cap V_2(F)| \ge 3|V_2(F)|/4$. Assume that there exists a $v \in V_1(F) \setminus C$. We can assume that $|C| = \mathcal{C}$, as otherwise we could move $v$ into $C$, which would result in a C-partition of higher value. Since $|C| = \mathcal{C}$ and $G[C]$ is connected, we know that $C$ includes vertices from $V_1(F_v)$, where $F_v$ is a vertex-gadget in $G$. Also, by Lemma 3.2, we know that $C \supseteq V(F_v)$. Since $|C| = \mathcal{C}$ and $G[C]$ is connected, there is also a vertex $u \in C \cap N[V_1(F_v)] \setminus (V_2(F_v) \cup V_2(F))$. Notice that $d(u) \le m + 10$. We claim that replacing $u$ by $v$ in $C$ results in a C-partition of higher value. Indeed, since $d(u) \le m + 10$, removing $u$ from $C$ reduces the value of the partition by at most $m + 10$. Moreover, $v$ has at least $3|V_2(F)|/4$ neighbors in $C$; therefore, moving $v$ into $C$ increases the value of $\mathcal{P}$ by at least $|V_2(F)|/2$. Since $|V_2(F)|/2 > m + 10$, this is a contradiction to the optimality of $\mathcal{P}$. Thus $V_1(F) \subseteq C$.

We are now ready to show that the vertices of any list-gadget belong to the same set in any optimal C-partition of $G$.

Lemma 3.5. Let $\mathcal{P} = \{C_1, \dots, C_p\}$ be an optimal C-partition of $G$, and let $F$ be a list-gadget in $G$. There exists a set $C \in \mathcal{P}$ such that $V(F) \subseteq C$.
Proof. By Lemma 3.4, there exists a $C \in \mathcal{P}$ such that $|C \cap V_2(F)| \ge 3|V_2(F)|/4$ and $V_1(F) \subseteq C$. Assume that there exists a vertex $v \in V_2(F) \setminus C$. We can assume that $|C| = \mathcal{C}$, as otherwise we could move $v$ into $C$, which would result in a C-partition of higher value (as most of the neighbors of $v$ are in $C$). Since $|C| = \mathcal{C}$ and $G[C]$ is connected, we know that $C$ includes vertices from $V_1(F_v)$, where $F_v$ is a vertex-gadget in $G$. Also, by Lemma 3.2, we know that $C \supseteq V(F_v)$. Since $C \supseteq V(F_v)$, we can conclude that there is no other list-gadget $F'$ in $G$ such that $V_1(F') \cap C \ne \emptyset$. Indeed, if $V_1(F') \cap C \ne \emptyset$ then, by Lemma 3.4, we would have $|C \cap V_2(F')| \ge 3|V_2(F')|/4$ and, thus, $|C| > \mathcal{C}$. Since $|C| = \mathcal{C}$ and $G[C]$ is connected, $C$ must include vertices from $N(V_1(F_v)) \setminus (V_2(F_v) \cup V_2(F))$. Also, since there is no other list-gadget $F'$ in $G$ with $V_1(F') \cap C \ne \emptyset$, any vertex $w \in C$ with $w \in N(V_1(F_v)) \setminus (V_2(F_v) \cup V_2(F))$ has $|N(w) \cap C| \le 6$. We claim that replacing such a vertex $w$ in $C$ by $v$ results in a C-partition of higher value. Indeed, since $|N(w) \cap C| \le 6$, removing $w$ from $C$ reduces the value of the partition by at most $6$. Also, since $v$ has at least $d(v) - 1$ of its neighbors in $C$, moving it into $C$ increases the value of the partition by at least $d(v) - 1 \ge m + 6 + 2 - 1 > 6$. This is a contradiction to the optimality of $\mathcal{P}$. Thus, $V(F) \subseteq C$.

As we have already mentioned, it follows from Lemmas 3.2 and 3.5 that, for any optimal C-partition $\mathcal{P}$ of $G$, any set $C \in \mathcal{P}$ that includes a vertex-gadget $F_v$ can also include at most one list-gadget $F$. We will show that any such set $C$ must, in fact, include exactly one list-gadget.

Lemma 3.6. Let $\mathcal{P} = \{C_1, \dots, C_p\}$ be an optimal C-partition of $G$, and let $F_v$ be a vertex-gadget in $G$. Let $C \in \mathcal{P}$ be the set such that $V(F_v) \subseteq C$. There exists a list-gadget $F$ such that $N(V_1(F_v)) \cap V(F) \ne \emptyset$ and $V(F) \subseteq C$.

Proof. By Lemma 3.2, for any vertex-gadget $F_v$ there exists a set $C \in \mathcal{P}$ such that $V(F_v) \subseteq C$. We will show that $C$ also includes a list-gadget $F$ with $N(V_1(F_v)) \cap V(F) \ne \emptyset$. Assume that this is not true, and let $F$ be any list-gadget such that $N(V_1(F_v)) \cap V(F) \ne \emptyset$. We can assume that $|C| \ge \mathcal{C} - |V(F)|$, as otherwise we could include $V(F)$ in $C$ and create a C-partition of higher value than $\mathcal{P}$. By the size of $F_v$, the assumption that $|C| \ge \mathcal{C} - |V(F)|$, and Lemma 3.2, we have that $C \setminus V(F_v) \subseteq V_E$. Let $S = V(F) \cup V(F_v)$ and $\mathcal{P}' = \{S, C_1 \setminus S, \dots, C_p \setminus S\}$. We claim that $v(\mathcal{P}) < v(\mathcal{P}')$. We calculate the values $|E(\mathcal{P}) \setminus E(\mathcal{P}')|$ and $|E(\mathcal{P}') \setminus E(\mathcal{P})|$. Notice that the only edges that may belong to $E(\mathcal{P}) \setminus E(\mathcal{P}')$ are the edges between $V_1(F_v)$ and $V_E$. This means that $|E(\mathcal{P}) \setminus E(\mathcal{P}')| \le 8n$ (since there are fewer than $n$ edges incident to $v$ in $H$, and $8$ edges between $V(F_v)$ and $V_e$ for any $e$ incident to $v$). As for $|E(\mathcal{P}') \setminus E(\mathcal{P})|$, since the edges between $U(F_v)$ and $V_2(F)$ do not contribute to $\mathcal{P}$, we have that $|E(\mathcal{P}') \setminus E(\mathcal{P})| \ge |U(F_v)| \cdot |V_2(F)|$. Since $|V_2(F)| \ge 5A - 2n > 4n$ (for any list-gadget and sufficiently large $n$) and $|U(F_v)| = 2$, we can conclude that $|U(F_v)| \cdot |V_2(F)| > 8n$. Therefore, $|E(\mathcal{P}) \setminus E(\mathcal{P}')| < |E(\mathcal{P}') \setminus E(\mathcal{P})|$, which contradicts the optimality of $\mathcal{P}$.

Finally, we will show that any vertex $u \in V_e$ must be in a set that includes either vertices from $V(F_e)$ or vertices from $V(F_u) \cup V(F_v)$, where $e = uv$. Formally:

Lemma 3.7. Let $\mathcal{P} = \{C_1, \dots, C_p\}$ be an optimal C-partition of $G$, let $w \in V_e$ for some $e = uv \in E(H)$, and let $w \in C$ for some $C \in \mathcal{P}$. If $V(F_u) \cap C = \emptyset$ and $V(F_v) \cap C = \emptyset$, then $V(F_e) \cup \{w\} \subseteq C$.
Proof. It follows from Lemma 3.2 that there exists a $C' \in \mathcal{P}$ such that $V(F_e) \subseteq C' \subseteq V(F_e) \cup V_e$. Indeed, assuming otherwise, $C'$ would include vertices from a vertex-gadget, in which case we would have $|C'| > \mathcal{C}$, a contradiction. Assume that $V(F_u) \cap C = \emptyset$ and $V(F_v) \cap C = \emptyset$. If $C' \ne C$, then $w$ contributes $0$ edges to the value of $\mathcal{P}$, since $N(w) \cap C = \emptyset$. Now, since $C' \subseteq V(F_e) \cup V_e$ and $|V(F_e)| = \mathcal{C} - 4$, we know that we can always move $w$ to $C'$ and increase the value of the partition. Therefore, $C' = C$.

Next, we will calculate the maximum possible value of any C-partition of $G$. Notice that in any optimal C-partition we have two kinds of sets: those that include vertices of vertex- or list-gadgets, and those that include vertices of edge-gadgets. We separate the sets of any optimal C-partition of $G$ accordingly. In particular, for an optimal C-partition $\mathcal{P}$, we define $\mathcal{P}_V$ and $\mathcal{P}_E$ as follows. We set $\mathcal{P}_E \subseteq \mathcal{P}$ such that $C \in \mathcal{P}_E$ if and only if there exists an edge-gadget $F_e$ such that $V(F_e) \subseteq C$; then we set $\mathcal{P}_V = \mathcal{P} \setminus \mathcal{P}_E$. It is straightforward to see that the previous lemmas also hold for optimal C-partitions $\mathcal{P}'$ of $G[V(\mathcal{P}_E)]$ and $\mathcal{P}''$ of $G[V(\mathcal{P}_V)]$: assuming otherwise, we could create a C-partition of $G$ of higher value, since $\mathcal{P}$ is the concatenation of $\mathcal{P}'$ and $\mathcal{P}''$.

Notice now that, for any vertex in $V(G) \setminus V_E$, we know whether it belongs to $V(\mathcal{P}_V)$ or to $V(\mathcal{P}_E)$; this is not true, however, for the vertices of $V_E$. We will use the number of vertices of $V_E$ that belong to $V(\mathcal{P}_E)$ in order to provide upper bounds on $|E(\mathcal{P}_V)|$ and $|E(\mathcal{P}_E)|$. Let us now consider an optimal partition $\mathcal{P}$, and let $S = V(\mathcal{P}_E) \cap V_E$, $x = |S|$ and $y = |V_E \setminus S|$.

We start with the upper bound on $|E(\mathcal{P}_V)|$. Let $F_v$ be a vertex-gadget. Recall that there are $n$ list-gadgets adjacent to $F_v$ and, by Lemma 3.6, exactly one of them is in the same set as $F_v$ in any optimal C-partition. Let $F$ be a list-gadget that is not in the same set as $F_v$ in $\mathcal{P}$ (and thus in $\mathcal{P}_V$). By Lemma 3.5, we know that all vertices of $F$ are in the same set of $\mathcal{P}_V$; thus, for each such gadget, we have $m_\ell$ edges in $E(\mathcal{P}_V)$. Since there are $n-1$ such list-gadgets for each of the $n$ vertex-gadgets, in total we have $n(n-1)m_\ell$ edges that do not belong to the same set as a vertex-gadget.

Now, let $F$ be the list-gadget such that the vertices of $V(F)$ and $V(F_v)$ are in the same set $C \in \mathcal{P}_V$, and let $\alpha_v$ be the value represented by the list-gadget $F$. Since $|V(F_v) \cup V(F)| = \mathcal{C} - 2\alpha_v$, at most $2\alpha_v$ of the $y$ vertices of $V_E$ can be in $C$. Let $|C \cap V_E| = y_v \le 2\alpha_v \le y$. Since these vertices must be incident to $V_1(F_v)$, we have that $|E(G[C])| = m_v + m_\ell + 4y_v + 2(5A - 2\alpha_v) = m_v + m_\ell + 10A + 4y_v - 4\alpha_v$; the $2(5A - 2\alpha_v)$ term comes from the fact that exactly $2$ vertices of $V_1(F_v)$ are adjacent to all the vertices of $V_2(F)$. By counting over all sets that include vertices from vertex-gadgets, we obtain
\[
m_v n + m_\ell n + 10nA + 4\sum_{v \in V(H)} y_v - 4\sum_{v \in V(H)} \alpha_v
\]
edges. In total:
\[
|E(\mathcal{P}_V)| = m_v n + m_\ell n^2 + 10nA + 4\sum_{v \in V(H)} y_v - 4\sum_{v \in V(H)} \alpha_v,
\]
where $n = |V(H)|$.

Now we will calculate an upper bound on $|E(\mathcal{P}_E)|$, and we will give some properties that must be satisfied in order to achieve this maximum. Let $S = V_E \cap V(\mathcal{P}_E)$. By Lemma 3.7, $\mathcal{P}_E$ consists of the vertex sets of the connected components of $G[V(\mathcal{P}_E)]$. Thus, in order to compute an upper bound on $|E(\mathcal{P}_E)|$, it suffices to find an upper bound on the number of edges of $G[S' \cup \bigcup_{e \in E(H)} V(F_e)]$, for any set $S' \subseteq V_E$ with $|S'| = |S|$.
For any $G[S' \cup \bigcup_{e \in E(H)} V(F_e)]$, where $S' \subseteq V_E$, we define types of its connected components based on the size of their intersection with $V_E$. In particular, let $\mathcal{X} = \{C_1, \dots, C_p\}$ be the vertex sets of the connected components of $G[S \cup \bigcup_{e \in E(H)} V(F_e)]$. For any set $C \in \mathcal{X}$ we have $|C \cap V_E| = i$ for some $i \in \{0, 1, 2, 3, 4\}$; we set $\mathcal{X}_i = \{C \in \mathcal{X} \mid |C \cap V_E| = i\}$. Notice that $\sum_{i=0}^4 |\mathcal{X}_i| = |E(H)|$. We claim that, in order to maximize the number of edges of $G[S \cup \bigcup_{e \in E(H)} V(F_e)]$, we want as many sets in $\mathcal{X}_4$ as possible. Formally, we have the following lemma.

Lemma 3.8. For any $0 \le x \le |V_E|$, let $S$ be a subset of $V_E$ such that $|S| = x$ and $|E(G[U_E \cup S])| = \max_{S' \subseteq V_E, |S'| = x} |E(G[U_E \cup S'])|$. Assume that $\mathcal{X} = \{C_1, \dots, C_p\}$ are the vertex sets of the connected components of $G[U_E \cup S]$. We have that $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| = 0$ if $x \bmod 4 = 0$, and $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| = 1$ otherwise.

Proof. First, we will prove that $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| \le 1$. Assume that $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| > 1$. We will show that there exists a set $S'$ such that $|S'| = x$ and $|E(G[U_E \cup S])| < |E(G[U_E \cup S'])|$. Let $C_1$ and $C_2$ be two sets of $\mathcal{X}$ such that $C_1, C_2 \notin \mathcal{X}_0 \cup \mathcal{X}_4$, say $C_1 \in \mathcal{X}_{\ell_1}$ and $C_2 \in \mathcal{X}_{\ell_2}$. We distinguish two cases: $\ell_1 + \ell_2 \le 4$ and $\ell_1 + \ell_2 > 4$.

Case 1: $\ell_1 + \ell_2 \le 4$. Let $F_{e_1}$ and $F_{e_2}$ be the edge-gadgets such that $V(F_{e_1}) \subseteq C_1$ and $V(F_{e_2}) \subseteq C_2$. We modify the sets $C_1$ and $C_2$ as follows:
• we replace $C_1$ with $C_1' = C_1 \setminus V_{e_1}$, and
• we replace $C_2$ with $C_2' = (C_2 \setminus V_{e_2}) \cup Y$, where $Y \supseteq \{w_{L_1}^{e_2}, w_{R_1}^{e_2}\}$ and $|Y| = \ell_1 + \ell_2$.

Let $S' = Y \cup (S \setminus (V_{e_1} \cup V_{e_2}))$ and let us denote the resulting collection by $\mathcal{X}'$. Notice that $\ell_1 + \ell_2 \ge 2$, so we can always choose $Y \supseteq \{w_{L_1}^{e_2}, w_{R_1}^{e_2}\}$ with $|Y| = \ell_1 + \ell_2$. Also, $|V(\mathcal{X}) \cap V_E| = |V(\mathcal{X}') \cap V_E| = x$. It remains to show that $|E(\mathcal{X}')| > |E(\mathcal{X})|$; for this, it suffices to show that $|E(G[C_1'])| + |E(G[C_2'])| > |E(G[C_1])| + |E(G[C_2])|$. To achieve that, we distinguish three sub-cases: $\ell_1 + \ell_2 = 2$, $\ell_1 + \ell_2 = 3$, or $\ell_1 + \ell_2 = 4$.

Case 1.a: $\ell_1 + \ell_2 = 2$. Since $\ell_1 \ge 1$ and $\ell_2 \ge 1$, we have $\ell_1 = \ell_2 = 1$. Thus, by the construction of $G$, $|E(G[C_1])| = |E(G[C_2])| = |E(F_{e_1})| + 1$ (as all edge-gadgets have the same number of edges). Also, since $\ell_1 + \ell_2 = 2$, we get $Y = \{w_{L_1}^{e_2}, w_{R_1}^{e_2}\}$. This, by the construction of $G$, gives $|E(G[C_1'])| = |E(F_{e_1})|$ and $|E(G[C_2'])| = |E(F_{e_2})| + 3$. Therefore, $|E(G[C_1'])| + |E(G[C_2'])| > |E(G[C_1])| + |E(G[C_2])|$.

Case 1.b: $\ell_1 + \ell_2 = 3$. Since $\ell_1 \ge 1$ and $\ell_2 \ge 1$, either $\ell_1 = 2$ and $\ell_2 = 1$, or $\ell_1 = 1$ and $\ell_2 = 2$. Assume, w.l.o.g., that $\ell_1 = 2$ and $\ell_2 = 1$. By the construction of $G$, we have $|E(G[C_1])| \le |E(F_{e_1})| + 3$ and $|E(G[C_2])| \le |E(F_{e_2})| + 1$. Also, since $\ell_1 + \ell_2 = 3$ and $Y \supseteq \{w_{L_1}^{e_2}, w_{R_1}^{e_2}\}$, we have $|E(G[C_1'])| = |E(F_{e_1})|$ and $|E(G[C_2'])| = |E(F_{e_2})| + 5$. Therefore, $|E(G[C_1'])| + |E(G[C_2'])| > |E(G[C_1])| + |E(G[C_2])|$.

Case 1.c: $\ell_1 + \ell_2 = 4$. Since $\ell_1 \ge 1$ and $\ell_2 \ge 1$, either $\ell_1 = \ell_2 = 2$, or one of $\ell_1, \ell_2$ is $1$ and the other is $3$. In the first case, $|E(G[C_1])| = |E(G[C_2])| \le |E(F_{e_1})| + 3$, while in the second, $|E(G[C_1])| = |E(F_{e_1})| + 1$ and $|E(G[C_2])| = |E(F_{e_2})| + 5$. In both cases, $|E(G[C_1])| + |E(G[C_2])| \le |E(F_{e_1})| + |E(F_{e_2})| + 6$. Also, since $\ell_1 + \ell_2 = 4$ and $Y = V_{e_2}$, we have $|E(G[C_1'])| = |E(F_{e_1})|$ and $|E(G[C_2'])| = |E(F_{e_2})| + 8$. Therefore, $|E(G[C_1'])| + |E(G[C_2'])| > |E(G[C_1])| + |E(G[C_2])|$.

Case 2: $\ell_1 + \ell_2 > 4$. Let $F_{e_1}$ and $F_{e_2}$ be the edge-gadgets such that $V(F_{e_1}) \subseteq C_1$ and $V(F_{e_2}) \subseteq C_2$. We modify the sets $C_1$ and $C_2$ as follows:
• we replace $C_1$ with $C_1' = C_1 \cup V_{e_1}$, and
• we replace $C_2$ with $C_2' = (C_2 \setminus V_{e_2}) \cup Y$, where $Y \subseteq \{w_{L_1}^{e_2}, w_{R_1}^{e_2}\}$ and $|Y| = \ell_1 + \ell_2 - 4$.

Indeed, it suffices to have $Y \subseteq \{w_{L_1}^{e_2}, w_{R_1}^{e_2}\}$, as $2 \le \ell_1, \ell_2 \le 3$ and thus $\ell_1 + \ell_2 - 4 < 3$. We need to consider two cases: either $\ell_1 + \ell_2 = 5$ or $\ell_1 + \ell_2 = 6$.
Case 2.a: $\ell_1 + \ell_2 = 5$. In this case, one of $\ell_1, \ell_2$ equals $2$ while the other equals $3$. W.l.o.g., let $\ell_1 = 2$. By the construction of $G$, we get $|E(G[C_1])| \le |E(F_{e_1})| + 3$ (as all edge-gadgets have the same number of edges) and $|E(G[C_2])| \le |E(F_{e_2})| + 5$. Now observe that $|E(G[C_1'])| = |E(F_{e_1})| + 8$ and $|E(G[C_2'])| = |E(F_{e_2})| + 1$. Therefore, $|E(G[C_1'])| + |E(G[C_2'])| > |E(G[C_1])| + |E(G[C_2])|$.

Case 2.b: $\ell_1 + \ell_2 = 6$. In this case, $\ell_1 = \ell_2 = 3$. By the construction of $G$, we obtain $|E(G[C_1])| = |E(G[C_2])| = |E(F_{e_1})| + 5$ (as all edge-gadgets have the same number of edges). We also have $|E(G[C_1'])| = |E(F_{e_1})| + 8$ and $|E(G[C_2'])| = |E(F_{e_2})| + 3$ (since $Y = \{w_{L_1}^{e_2}, w_{R_1}^{e_2}\}$ in this case). Therefore, $|E(G[C_1'])| + |E(G[C_2'])| > |E(G[C_1])| + |E(G[C_2])|$.

To sum up, we have that $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| \le 1$. We will now show that $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| = 0$ if $x \bmod 4 = 0$, and $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| = 1$ otherwise. Assume that $x \bmod 4 = 0$. Notice that $4|\mathcal{X}_4| + 3|\mathcal{X}_3| + 2|\mathcal{X}_2| + |\mathcal{X}_1| + 0|\mathcal{X}_0| = x$; therefore
\[
(4|\mathcal{X}_4| + 3|\mathcal{X}_3| + 2|\mathcal{X}_2| + |\mathcal{X}_1|) \bmod 4 = 0 \implies (3|\mathcal{X}_3| + 2|\mathcal{X}_2| + |\mathcal{X}_1|) \bmod 4 = 0.
\]
This implies that $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| = 0$. Indeed, assuming otherwise, we get $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| = 1$, and thus $(3|\mathcal{X}_3| + 2|\mathcal{X}_2| + |\mathcal{X}_1|) \bmod 4 = i$ for some $i \in [3]$; this contradicts $(3|\mathcal{X}_3| + 2|\mathcal{X}_2| + |\mathcal{X}_1|) \bmod 4 = 0$. Next, assume that $x \bmod 4 = i$ for some $i \in [3]$. Then $(3|\mathcal{X}_3| + 2|\mathcal{X}_2| + |\mathcal{X}_1|) \bmod 4 = i$. If $|\mathcal{X}_1| + |\mathcal{X}_2| + |\mathcal{X}_3| = 0$, then the previous equation implies $i = 0$, a contradiction. This finishes the proof of the lemma.

It follows that the maximum value of $\max_{S' \subseteq V_E, |S'| = x} |E(G[U_E \cup S'])|$ is
• $m_e|E(H)| + 8x/4$ when $x \bmod 4 = 0$, or
• $m_e|E(H)| + 8(x - i)/4 + x_i$ when $x \bmod 4 = i$, where $x_1 = 1$, $x_2 = 3$, $x_3 = 5$.

Notice that $\max_{S' \subseteq V_E, |S'| = x} |E(G[U_E \cup S'])| \le m_e|E(H)| + 2x$, where equality holds only when $x \bmod 4 = 0$. Thus, we have the following:

Corollary 3.9. Given that $x = |S|$, the maximum value of $\mathcal{P}$ is $v(\mathcal{P}) \le m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$. Moreover, this value can be achieved only when $x \bmod 4 = 0$ and $|V_E| - x = y = \sum_{v \in V(H)} 2\alpha_v$.

We are finally ready to prove our main theorem.

Theorem 1.2. The C-Coalition Formation problem is W[1]-hard when parameterized by the tree-depth of the input graph.

Proof. Let $(H, L)$ be the input of the General Factors problem, let $G$ be the graph constructed from $(H, L)$ as described above, and let $\mathcal{P}$ be an optimal C-partition of $G$. We will prove that the following two statements are equivalent:
• $v(\mathcal{P}) = m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$;
• $(H, L)$ is a yes-instance of the General Factors problem.

Assume that $(H, L)$ is a yes-instance of the General Factors problem, and let $E' \subseteq E(H)$ be an edge set such that, for any vertex $v \in V(H)$, we have $d_{H - E'}(v) \in L(v)$. We will create a C-partition of $G$ of value $m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$. For each edge $e \in E(H) \setminus E'$ we create a set $C_e = V(F_e)$, and for each $e \in E'$ we create a set $C_e = V(F_e) \cup V_e$. For each $v \in V(H)$, let $F$ be the list-gadget that represents the value $d_{H - E'}(v)$; the existence of such a list-gadget is guaranteed since $d_{H - E'}(v) \in L(v)$. Also, let $U_v$ be the subset of $V_E$ such that, for any $u \in U_v$, there exists an edge $e \in E(H) \setminus E'$ such that $u \in V_e$ and $u$ is incident to the vertices of $V_1(F_v)$ (this means that $e$ is incident to $v$ in $H$). Notice that the vertices of $U_v$ have not been included in any set $C_e$, $e \in E(H)$, created so far. Now, for each $v \in V(H)$, we create a set $C_v = V(F_v) \cup V(F) \cup U_v$. It remains to deal with the list-gadgets that have not yet been included in any set.
We create the sets $C_1, \dots, C_{n(n-1)}$, one for each of them. We claim that $\mathcal{P} = \{C_e \mid e \in E(H)\} \cup \{C_v \mid v \in V(H)\} \cup \{C_1, \dots, C_{n(n-1)}\}$ is a C-partition of $G$ and that $v(\mathcal{P}) = m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$. Notice that any set $C \in \{C_e \mid e \in E(H)\} \cup \{C_1, \dots, C_{n(n-1)}\}$ has size at most $\mathcal{C}$, as it is either the vertex set of a list-gadget or a subset of $V(F_e) \cup V_e$ for some $e \in E(H)$. Thus, we only need to show that $|C_v| \le \mathcal{C}$ for all $v \in V(H)$. We have that $V(F_v) \cup V(F) \subseteq C_v$, where $F$ is the list-gadget that represents the value $d_{H - E'}(v) \in L(v)$; therefore, $|V(F_v) \cup V(F)| = \mathcal{C} - 2d_{H - E'}(v)$. We claim that $|U_v| = 2d_{H - E'}(v)$. Recall that $U_v$ contains the vertices $u$ of $V_E$ for which there exists an edge $e \in E(H) \setminus E'$ such that $u \in V_e$ and $u$ is incident to the vertices of $V_1(F_v)$. There are exactly $d_{H - E'}(v)$ edges incident to $v$ in $E(H) \setminus E'$ and, for each such edge $e$, two vertices of $V_e$ are incident to $V_1(F_v)$ (the vertices $w_{L_1}^e, w_{L_2}^e$ if $v \in V_L$, and $w_{R_1}^e, w_{R_2}^e$ if $v \in V_R$). Thus $|U_v| = 2d_{H - E'}(v)$ and $|C_v| = \mathcal{C}$.

It remains to argue that $v(\mathcal{P}) = m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$. First, notice that the vertex set $V(F)$ of any gadget $F$ belongs to a single set of $\mathcal{P}$; thus, every edge of $E(G[V(F)])$ contributes to the value of $\mathcal{P}$. This gives us $m_v$ edges for each vertex-gadget, $m_e$ edges for each edge-gadget and $m_\ell$ edges for each list-gadget. Since we have $|V(H)|$ vertex-gadgets, $|E(H)|$ edge-gadgets and $|V(H)|^2$ list-gadgets, this gives $m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)|$ edges (up to this point).

We also need to compute the number of edges in $E(\mathcal{P})$ that do not belong to $E(G[V(F)])$ for any gadget $F$. Let $S$ be the set $E(\mathcal{P}) \setminus \bigcup_{F \text{ a gadget}} E(G[V(F)])$. Notice that, for any $C \in \mathcal{P}$, the set $S$ contains edges of $G[C]$ only if there exists a vertex- or edge-gadget $F$ such that $V(F) \subseteq C$.

First we consider a set $C$ that includes a vertex-gadget. By construction, $C$ includes the vertices of a vertex-gadget $F_v$, the vertices of a list-gadget $F$ representing an integer $\alpha_v$, and $2\alpha_v$ vertices from the set $V_E$. There are exactly $|U(F_v)| \cdot |V_2(F)|$ edges between $F_v$ and $F$. Also, for any vertex $u \in C \cap V_E$, we have $N(u) \cap C = V_1(F_v)$. Thus, we have $|U(F_v)| \cdot |V_2(F)| + |C \cap V_E| \cdot |V_1(F_v)| = 2(5A - 2\alpha_v) + 2\alpha_v \cdot 4 = 10A + 4\alpha_v$ edges. Also, by construction, $C \cap V_E$ is an independent set, so there are no other edges to count. Over all such sets, this gives $10nA + 4\sum_{v \in V(H)} \alpha_v$ edges.

Now we consider a set $C$ that includes an edge-gadget. By the construction of $C$, there exists an edge $e \in E(H)$ such that either $C = V(F_e)$ or $C = V(F_e) \cup V_e$. Therefore, if $e \in E'$ then $C = V(F_e) \cup V_e$ and $E(G[C])$ includes $8$ edges incident to vertices of $V_E$, while if $e \notin E'$ then $C = V(F_e)$ and $E(G[C])$ includes no edges incident to vertices of $V_E$. This gives us $8|E'|$ extra edges.

In order to complete the calculation of $|S|$, we need to observe that the values $\alpha_v$, $v \in V(H)$, and $|E'|$ are related. In particular, by the selection of the $\alpha_v$, we have $\sum_{v \in V(H)} d_{H - E'}(v) = \sum_{v \in V(H)} \alpha_v = 2|E(H) \setminus E'|$. It follows that
\[
|S| = 10nA + 4\sum_{v \in V(H)} \alpha_v + 8|E'| = 10nA + 8|E(H) \setminus E'| + 8|E'| = 10nA + 8|E(H)|.
\]
In total, $|E(\mathcal{P})| = m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$.

For the reverse direction, assume that we have a C-partition $\mathcal{P}$ of $G$ such that $v(\mathcal{P}) = m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$. By the calculated upper bounds, we have that
• $|E(\mathcal{P}_E)| = m_e|E(H)| + 2x$, and
• $|E(\mathcal{P}_V)| = m_v n + m_\ell n^2 + 10nA + 4\sum_{v \in V(H)} y_v - 4\sum_{v \in V(H)} \alpha_v$.

Also, in order to achieve the value $m_v|V(H)| + m_\ell|V(H)|^2 + m_e|E(H)| + 10A|V(H)| + 8|E(H)|$, we must have $\sum_{v \in V(H)} \alpha_v = (|V_E| - x)/2$.
Recall that, in order to achieve the maximum value, for any edge $e \in E(H)$, either $V_e \subset V(\mathcal{P}_V)$ or $V_e \subset V(\mathcal{P}_E)$. Let $E' = \{e \in E(H) \mid V_e \subset V(\mathcal{P}_E)\}$. We claim that, for any $v \in V(H)$, we have $d_{H - E'}(v) \in L(v)$. Let $V(F_v) \subseteq C$ for some $C \in \mathcal{P}$, let $F$ be the list-gadget such that $V(F) \subseteq C$, and let $|C \cap V_E| = x_v$. By Corollary 3.9, if the partition is of optimal value, then $2\alpha_v = x_v$, where $\alpha_v$ is the value represented by $F$. Observe that, for any edge $e \in E(H) \setminus E'$ incident to $v$, two vertices of $V_e$ are in $C \cap V_E$; thus, $2d_{H - E'}(v) = x_v$. Since $2\alpha_v = x_v$ and $\alpha_v \in L(v)$ (by the construction of the list-gadgets), we obtain that $d_{H - E'}(v) = \alpha_v \in L(v)$. Thus $(H, L)$ is a yes-instance of the General Factors problem.

The tree-depth of $G$ is bounded. It remains to show that the tree-depth of $G$ is bounded by a computable function of $m$. Recall that $m$ is the size of one of the bipartitions of $H$; w.l.o.g., assume that $|V_L| = m$. We start by deleting the sets $V_1(F_v)$ for all $v \in V_L$; this deletes $4m$ vertices. We then calculate an upper bound on the tree-depth of the remaining graph. In the new graph, there are connected components that include vertices from vertex-gadgets $F_v$, $v \in V_R$, but no connected component includes two such gadgets. From each such component, we delete the vertices $V_1(F_v)$ for the corresponding $v \in V_R$. Since these deletions take place in different components, they increase the bound on the tree-depth of the original graph by only $4$. After these deletions, every remaining connected component is:
• either a list-gadget $F$,
• or isomorphic to $G[V_e \cup V(F_e)]$ (for some $e \in E(H)$),
• or isomorphic to $G[V_2(F_v) \cup V_3(F_v)]$ (for some $v \in V(H)$).

We claim that in each of these cases, the tree-depth of the connected component is at most $O(m)$. Consider a list-gadget $F$: $G[V(F)]$ has tree-depth at most $m + 1$, because removing $V_1(F)$ leaves a set of independent vertices plus a matching. Consider a connected component isomorphic to $G[V_e \cup V(F_e)]$: it has tree-depth $2m + 5$, because removing $V_1(F_e) \cup V_3(F_e) \cup V_e$ results in an independent set, and $|V_1(F_e) \cup V_3(F_e) \cup V_e| = 2m + 5$ for all $e \in E(H)$. Finally, consider a connected component isomorphic to $G[V_2(F_v) \cup V_3(F_v)]$: its tree-depth is bounded from above by $2m$, since deleting $V_3(F_v)$ results in an independent set. In total, the tree-depth of $G$ is bounded from above by $3m + 9$. This completes the proof.

3.3. Graphs of bounded vertex cover number

Next, we consider a larger parameter, the vertex cover number of the input graph. We begin by establishing that Weighted C-Coalition Formation admits an FPT algorithm parameterized by the vertex cover number of the input graph alone. This contrasts with Theorems 1.1 and 1.2, since in the following theorem we can remove the dependence on $\mathcal{C}$, even for the weighted case.

Theorem 1.3. The Weighted C-Coalition Formation problem can be solved in time $\mathrm{vc}^{O(\mathrm{vc})} n^{O(1)}$, where $\mathrm{vc}$ denotes the vertex cover number of the input graph.

Proof. Let $U$ be a vertex cover of $G$ of size $\mathrm{vc}$ and let $I$ be the independent set $V \setminus U$. If such a vertex cover is not provided as input, we can compute one in $2^{\mathrm{vc}} n^{O(1)}$ time [20]. First, observe that there can be at most $\mathrm{vc}$ many coalitions in $G$ with positive contribution (since the contribution comes from edges, and each edge in $G$ is incident to some vertex in $U$). Next, we guess $\mathcal{P}' = \{C_1, \dots, C_p\}$ (here $p \le \mathrm{vc}$), the intersection of the sets of an optimal C-partition of $G$ with $U$; let $W = v(\mathcal{P}')$. Notice that we can enumerate all $\mathrm{vc}^{O(\mathrm{vc})}$ partitions of $U$ in $\mathrm{vc}^{O(\mathrm{vc})}$ time. Then, for each $\mathcal{P}'$, we do the following (in $n^{O(1)}$ time). We create a new graph $G'$ as follows. First, we create vertex sets $S_i$ with $|S_i| = \mathcal{C} - |C_i|$ for each $i \in [p]$. Then, we add all edges between a vertex $x \in I$ and the set $S_i$ whenever $x \in N(C_i)$, for every $i \in [p]$. Formally, $S_i = \{u_1^i, \dots, u_{|S_i|}^i\}$ for $i \in [p]$, $V(G') = I \cup \bigcup_{i \in [p]} S_i$, and $E(G') = \{u_j^i x \mid x \in N(C_i) \cap I,\ i \in [p],\ j \in [|S_i|]\}$. Finally, for every edge $xy$ with $x \in S_i$ and $y \in I$, we set the weight $w(xy) = \sum_{u \in C_i,\, uy \in E} w(uy)$, i.e., equal to how much $W$ would increase if $y$ were added to $C_i$. Now, observe that in order to compute an optimal C-partition of $G$ whose intersection with $U$ is $\mathcal{P}'$, it suffices to find a maximum-weight matching of $G'$, which can be done in polynomial time [28]. Since we perform this operation for each possible intersection of the C-partition with $U$, all of which can be enumerated in $\mathrm{vc}^{O(\mathrm{vc})}$ time, we can compute an optimal C-partition of $G$ in time $\mathrm{vc}^{O(\mathrm{vc})} n^{O(1)}$.
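The matching step can be made concrete. The following sketch is our own illustration (not the authors' implementation) of how $G'$ might be built for one guessed partition $\mathcal{P}'$ of $U$ and solved with a library matching routine; the names and the use of the networkx package are assumptions made for the example.

\begin{verbatim}
import networkx as nx

def best_extension(G, weights, guessed_partition, I, cap):
    """Given a guessed partition of the vertex cover U into C_1..C_p,
    optimally assign independent-set vertices via max-weight matching.

    G: adjacency dict; weights: dict frozenset({a, b}) -> edge weight;
    guessed_partition: list of sets C_i (subsets of U); I: V \ U.
    """
    H = nx.Graph()
    for i, Ci in enumerate(guessed_partition):
        free = cap - len(Ci)              # C - |C_i| slot copies
        for j in range(free):
            slot = ("slot", i, j)
            for y in I:
                # gain of adding y to C_i: weight of edges from y into C_i
                gain = sum(weights.get(frozenset((u, y)), 0)
                           for u in Ci if y in G[u])
                if gain > 0:
                    H.add_edge(slot, ("vertex", y), weight=gain)
    # a maximum-weight matching decides which y joins which coalition
    M = nx.max_weight_matching(H)
    assignment = {}
    for a, b in M:
        slot, vert = (a, b) if a[0] == "slot" else (b, a)
        assignment[vert[1]] = slot[1]     # y -> index i of coalition C_i
    return assignment
\end{verbatim}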
In the rest of this subsection, we establish that our algorithm from Theorem 1.3, which enumerates all possible intersections of a smallest vertex cover with an optimal solution, is asymptotically optimal assuming the ETH. Notably, we design a tight lower bound matching both Theorems 1.1 and 1.3, by establishing that it is highly unlikely that C-Coalition Formation admits an algorithm with running time $(\mathcal{C}\,\mathrm{vc})^{o(\mathrm{vc} + \mathcal{C})} n^{O(1)}$. This is achieved through a reduction from a restricted version of the 3-SAT problem.

R3-SAT
Input: A 3-SAT formula $\phi$ defined on a set of variables $X$ and a set of clauses $C$. Additionally, each variable appears at most four times in $C$, and the variable set $X$ is partitioned into $X_1 \cup X_2 \cup X_3$ such that every clause includes at most one variable from each of the sets $X_1$, $X_2$ and $X_3$.
Question: Does there exist a truth assignment to the variables of $X$ that satisfies $\phi$?

First, we establish that R3-SAT is unlikely to admit a $2^{o(n+m)}$-time algorithm.

Lemma 3.10. The R3-SAT problem is NP-hard. Also, under the ETH, there is no algorithm that solves this problem in time $2^{o(n+m)}$, where $n = \max\{|X_1|, |X_2|, |X_3|\}$ and $m$ is the number of clauses.

Proof. The reduction is from 3-SAT. First, we make sure that each variable appears at most four times. Assume that a variable $x$ appears $k > 3$ times. We create $k$ new variables $x_1, \dots, x_k$ and replace the $i$-th appearance of $x$ with $x_i$. Finally, we add the clauses $(x_1 \vee \neg x_2) \wedge (x_2 \vee \neg x_3) \wedge \dots \wedge (x_k \vee \neg x_1)$; this cycle of implications forces all copies of $x$ to receive the same truth value. This procedure is repeated until no variable appears more than $3$ times.

Next, we create an instance where the variables are partitioned in the desired way. First, we fix the order in which the variables appear in each clause. Let $x$ be any variable appearing in the formula. If $x$ appears only in the $i$-th position of every clause it is part of (for some $i \in [3]$), then we add $x$ to $X_i$. Otherwise, we create three new variables $x_1, x_2, x_3$ and, for each clause $c \in C$, if $x$ appears in the $i$-th position of $c$, we replace it with $x_i$. Notice that, at this point, $x_i$ appears at most twice for each $i \in [3]$. We add $x_i$ to the set $X_i$ for all $i \in [3]$. Also, we add the clauses $(x_1 \vee \neg x_2) \wedge (x_2 \vee \neg x_3) \wedge (x_3 \vee \neg x_1)$; thus, in any satisfying assignment of the formula, the variables $x_1$, $x_2$ and $x_3$ receive the same value. Notice that in each of the original clauses, the $i$-th literal contains a variable from $X_i$; therefore, each of the original clauses has at most one variable from $X_i$ for each $i \in [3]$. This is also true for all the clauses added during the construction. It is easy to see that the constructed formula is satisfiable if and only if $\phi$ is satisfiable.

Finally, notice that the number of variables and clauses added is linear in $n + m$. Therefore, no algorithm can decide whether the new instance is satisfiable in time $2^{o(n+m)}$, unless the ETH fails.
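The two rewriting steps in this proof are purely syntactic and easy to mechanize. The sketch below is our own illustration of the first step: splitting a frequently occurring variable and appending the equality-enforcing cycle of clauses. The clause representation (lists of signed literals) and the name `next_var` are assumptions for the example.

\begin{verbatim}
def split_variable(clauses, x, next_var):
    """Replace the i-th occurrence of variable x by a fresh copy x_i and
    append the cycle (x_1 v ~x_2), (x_2 v ~x_3), ..., (x_k v ~x_1), which
    forces all copies to take the same truth value.

    Literals are pairs (var, positive); next_var() yields fresh names.
    """
    copies = []
    new_clauses = []
    for clause in clauses:
        out = []
        for (var, positive) in clause:
            if var == x:
                fresh = next_var()        # one fresh copy per occurrence
                copies.append(fresh)
                out.append((fresh, positive))
            else:
                out.append((var, positive))
        new_clauses.append(out)
    k = len(copies)
    for i in range(k):                    # (x_i v ~x_{i+1}), indices cyclic
        new_clauses.append([(copies[i], True),
                            (copies[(i + 1) % k], False)])
    return new_clauses
\end{verbatim}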
Figure 5: The gadget $G_{i,j}$ used in the construction of Theorem 1.4, consisting of the sets $V_{i,j}$, the assignment sets $U_u$ with attached independent sets $I_u$ of size $30N\log n$, and the independent sets $I_{v,u}$ of size $3\log n$.

Once more, we begin by describing the construction of our reduction, continue with a high-level idea of the reduction, prove the set of properties that must be satisfied by any optimal C-partition of the constructed graph, and finish with the reduction itself.

The construction. Let $(X, C)$ be an instance of the R3-SAT problem, and let $X = X_1 \cup X_2 \cup X_3$ be the partition of $X$ as defined above. We may assume that $|X_1| = |X_2| = |X_3| = n = 2^k$ for some $k \in \mathbb{N}$; if this is not the case, we can add enough dummy variables that are not used anywhere. We can also assume that $k$ is even; if not, we can double the variables to achieve this. Notice that the number of additional dummy variables is at most $2\max\{|X_1|, |X_2|, |X_3|\}$, so the number of variables remains linear in $\max\{|X_1|, |X_2|, |X_3|\} + m$.

We start by partitioning each variable set $X_i$ into $k = \log n$ sets $X_{i,1}, \dots, X_{i,k}$, with $|X_{i,j}| \le \lceil n/\log n \rceil$ for every $j \in [k]$. For each set $X_{i,j}$, we construct a variable gadget $G_{i,j}$ (illustrated in Figure 5) as follows:

• First, we create a vertex set $V_{i,j}$ with $2N = 2\lceil n/\log^2 n \rceil$ vertices. Each vertex in $V_{i,j}$ represents at most $\frac{\log n}{2}$ variables. To see that we have enough vertices to achieve this, observe that $X_{i,j}$ is a set of at most $\lceil n/\log n \rceil$ variables; thus
\[
\left\lceil \frac{\lceil n/\log n \rceil}{2\lceil n/\log^2 n \rceil} \right\rceil \le \left\lceil \frac{n}{2\log n \cdot \frac{n}{\log^2 n}} \right\rceil = \left\lceil \frac{\log n}{2} \right\rceil = \frac{\log n}{2},
\]
where the last equality holds because $\log n$ is assumed to be even. Hereafter, let $X(v)$ be the variable set represented by $v$. Notice that $X(v) \subseteq X_{i,j} \subseteq X_i$ for all $v \in V_{i,j}$.

• Then we create the set of assignment vertices $U_{i,j} = \{u_\ell \mid \ell \in [\sqrt{n}]\}$. For each vertex $v \in V_{i,j}$ and each assignment over the variable set $X(v)$, we want a vertex of $U_{i,j}$ to represent this assignment. Since $|X(v)| \le \frac{\log n}{2}$, there are at most $2^{\log n / 2} = \sqrt{n}$ different assignments over the variable set $X(v)$. Therefore, we can select the vertices of $U_{i,j}$ to represent the assignments over $X(v)$ in such a way that each assignment is represented by at least one vertex and no vertex represents more than one assignment. Notice that $U_{i,j}$ contains enough vertices to achieve this, since $|U_{i,j}| = \sqrt{n}$. We do the same for all vertices in $V_{i,j}$.
• We proceed by creating four copies $u^1, \dots, u^4$ of each vertex $u \in U_{i,j}$. For each assignment vertex $u$, let $U_u$ be the set $\{u, u^1, \dots, u^4\}$. For each set $U_u$, we add an independent set $I_u$ of size $30N\log n$; then, for each vertex $v \in I_u$, we add all the edges between $v$ and the vertices of $U_u$.

• Finally, for each pair $(v, u) \in V_{i,j} \times U_{i,j}$, we create an independent set $I_{v,u}$ of $3\log n$ vertices and, for all $w \in I_{v,u}$ and $x \in U_u \cup \{v\}$, we add the edge $wx$.

This concludes the construction of $G_{i,j}$, which corresponds to the set $X_{i,j}$.

Let $V_C$ be the set of clause vertices, which contains a vertex $v_c$ for each $c \in C$. We add the vertices of $V_C$ to the graph we are constructing. The edges incident to the vertices of $V_C$ are added as follows. Let $c \in C$, let $l$ be a literal that appears in $c$, let $x$ be the variable that appears in $l$ and let $v$ be the variable vertex such that $x \in X(v)$. We first add the edge $v_c v$. Now, consider the $(i, j) \in [3] \times [\log n]$ such that $v \in V_{i,j}$. For each vertex $u \in U_{i,j}$, we add the edge $u v_c$ if and only if $l$ becomes true under the assignment over $X(v)$ represented by $u$. Let $G$ be the resulting graph. Finally, set $\mathcal{C} = 42N\log n$ to be the capacity of the coalitions, where, recall, $N = \lceil n/\log^2 n \rceil$. This finishes our construction.

High-level description. We establish that, due to the structural characteristics of $G$ and the specific value chosen for $\mathcal{C}$, any optimal C-partition $\mathcal{P}$ of $G$ has a very particular structure. In particular, any set $S$ of $\mathcal{P}$ contains exactly one assignment vertex $u$. Moreover, if $S$ contains a clause vertex $v_c$ (representing the clause $c$) and $v(\mathcal{P})$ is greater than some threshold, then $S$ also contains a vertex $v$ from a variable set such that the assignment over $X(v)$ corresponding to $u$ satisfies $c$.

Properties of optimal C-partitions of $G$. First, we identify the structural properties of any optimal C-partition of $G$ that will be used in the reduction. In particular, we will show that for any optimal C-partition $\{C_1, \dots, C_p\}$ of $G$:
• for any $k \in [p]$, if $u \in C_k$ for some assignment vertex $u$, then $U_u \subseteq C_k$;
• for any $k \in [p]$, it holds that $|C_k \cap \bigcup_{(i,j) \in [3] \times [\log n]} U_{i,j}| \le 1$;
• for any $(i, j) \in [3] \times [\log n]$ and $v \in V_{i,j}$, if $v \in C$ then $C \cap U_{i,j} \ne \emptyset$.

Lemma 3.11. Let $\mathcal{P} = \{C_1, \dots, C_p\}$ be an optimal C-partition of $G$. Let $u \in U_{i,j}$ for some $(i, j) \in [3] \times [\log n]$ and $u \in C_k$ for some $k \in [p]$. Then $U_u \cap C_k = U_u$.

Proof. Assume that, for some $(i, j) \in [3] \times [\log n]$, there exists a set $U_u$ for an assignment vertex $u \in U_{i,j}$ such that $u \in C_k$ for some $k \in [p]$ and $U_u \cap C_k \ne U_u$. We will show that, in this case, $\mathcal{P}$ is not an optimal C-partition of $G$. Indeed, consider the following C-partition of $G$. First set $C = U_u \cup (N(U_u) \setminus V_C)$, and then let $\mathcal{P}' = \{C, C_1 \setminus C, \dots, C_p \setminus C\}$. Notice that $\mathcal{P}'$ is a C-partition: indeed, $|C| = 5 + 30N\log n + 2N \cdot 3\log n \le 42N\log n = \mathcal{C}$, and $|C_i \setminus C| \le |C_i| \le \mathcal{C}$ as $C_i \in \mathcal{P}$ for all $i \in [p]$. We will now show that $v(\mathcal{P}') > v(\mathcal{P})$. First observe that for every $w \in C$ we have:
• $w \in U_u$, or
• $w \in V_u$, where $V_u = \{w' \mid N(w') = U_u\}$, or
• $w \in V_u'$, where $V_u' = \{w' \mid N(w') = U_u \cup \{v\}$ for some $v \in V_{i,j}\}$.
V′ u={v|N(v) =Uu∪ {v}for some v∈Vi,j}. By construction, we know that |Vu|= 30Nlognand|V′ u|= 2·3Nlogn. We now consider P. Observe that the vertices of Uuare assigned to different compo- nents of P. Thus, we have that: •at most 4·30Nlogn= 120 Nlognof the edges incident to vertices of Vuare included theE(P), and •at most 5·2N·3 logn= 30Nlognof the edges incident to vertices of V′ uare included inE(P). Also, since |N(u)∩VC| ≤4Nlognand|N(ui)∩VC|= 0, for all i∈[4], we have that E(P)contains at most 4Nlognedges between UuandVC. Therefore, by removing Cfor allCi,i∈[p],wehavereducedthevalueof Pbyatmost (120+30+4) Nlogn= 154 Nlogn. Let us now count the number of edges in G[C]. Since Uu∪Vu⊆C, we have that E(G[C])includes all the 150Nlognedges between vertices of UuandVu. Also, we have that E(G[C])contains5 6of the edges incident to V′ u. Indeed, N(V′ u)∩C=Uu. This gives us another 5·2N3 logn= 30 Nlognedges. Furthermore, no other edge appears inG[C]. Thus, E(G[C])contains (150 + 30) Nlogn= 180 Nlognedges. Therefore, we have that v(P′)≥26Nlogn+v(P). This is a contradiction to the optimality of P, as v(P′)> v(P). Lemma 3.12. LetP={C1, . . . , C p}be an optimal C-partition of G. For any k∈p, there is no pair (u, u′)of vertices such that: •u∈Ui,jfor some (i, j)∈[3]×[logn], •u′∈Ui′,j′for some (i′, j′)∈[3]×[logn](it is not necessary that (i, j)̸= (i′, j′)) and •{u, u′} ⊆Ck. Proof.Assume that this is not true and let k∈[p]be an index for which such a pair (u, u′)exists in Ck. By the optimality of Pand Lemma 3.11, we have that Uu∪Uu′⊆Ck. By construction, we have that |Iu|=|Iu′|= 30Nlogn. Since uandu′belong in the same CkandC= 42Nlogn, we know that there are at least (2·30−42)Nlogn= 18Nlogn vertices from the sets IuandIu′that do not belong in Ck. Notice that these vertices do 29 not contribute at all to the value of Pas they are not in the same partition as any of their neighbors. Consider the sets C1=Uu∪Iu,C2=Uu′∪Iu′andC=C1∪C2. We create theC-partition P′={C1, C2, C1\C, . . . , C p\C}. Notice that P′is indeed a C-partition as|C1|=|C2|= 5 + 30 Nlogn≤42Nlognand|Ci\C| ≤ |Ci| ≤42NlognasCi∈ Pfor alli∈[p]. We will show that v(P)< v(P′). First, we will deal with the edges incident to vertices of IuandIu′. Notice that C1 andC2include all the edges between UuandIuas well as the edges between Uu′and Iu′. Therefore, |E(P′)\E(P)| ≥5(2·30−42)Nlogn= 90Nlogn. Indeed, each vertex ofIu∪Iu′has exactly five neighbors in the set C1∪C2and at least 18Nlognedges do not contribute any value to P. Now we consider the edges incident to vertices in W=N(Uu∪Uu′)\(Iu∪Iu′). Observe that, in the worst case, all the edges between verticesof WandUu∪Uu′areincludedin E(P)whilenoneofthemisincludedin P′. Also, any edge that is not incident to Uu∪Uu′is either included in both E(P)andE(P′)or in none of them. Notice that any vertex in Uu(respectively in Uu′) has 2N·3 lognneighbors inV(Gi,j)\Iu(resp. in V(Gi′,j′)\Iu′). Furthermore, u(resp. u′) has at most 4Nlogn neighbors in VC. Also, there are no other neighbors of these vertices to be considered. Therefore, in the worst case, |E(P)\E(P′)|= 2N·3 logn+ 8Nlogn= 68Nlogn. Since 90Nlogn >68Nlognwe have that v(P′)> v(P), contradicting the optimality of P. Lemma 3.13. LetP={C1, . . . , C p}be an optimal C-partition of G.
Lemma 3.13. Let $\mathcal{P} = \{C_1,\dots,C_p\}$ be an optimal $\mathcal{C}$-partition of $G$. For any $(i,j) \in [3]\times[\log n]$ and $u \in U_{i,j}$, if $u \in C$ for some $C \in \mathcal{P}$, then any $v \in N(u)\cap V(G_{i,j})$ also belongs to $C$.
Proof. Assume that, for some $(i,j) \in [3]\times[\log n]$, there exist a $u \in U_{i,j}$ and a $w \in N(u)\cap V(G_{i,j})$ such that $u \in C_k$ and $w \notin C_k$. We will show that $\mathcal{P}$ is not optimal. It follows from Lemma 3.11 that $U_u \subseteq C_k$. We distinguish the following two cases: either $|C_k| < \mathcal{C}$ or not.
Case 1: $|C_k| < \mathcal{C}$. In this case, either $w \in I_u$ or $w \in I_{v,u}$ for some $v \in V_{i,j}$. Since $w$ has at most one neighbor that does not belong to $C_k$, moving $w$ to $C_k$ creates a $\mathcal{C}$-partition that includes more edges than $\mathcal{P}$. This is a contradiction to the optimality of $\mathcal{P}$.
Case 2: $|C_k| = \mathcal{C}$. In this case, it is safe to assume that $G[C_k]$ is connected, as otherwise we can partition it into its connected components. This does not change the value of the partition, and the resulting set that contains $u$ has size less than $\mathcal{C}$. We proceed by considering two sub-cases, depending on whether $C_k\cap V_C \ne \emptyset$ or not.
Case 2.a: $C_k\cap V_C \ne \emptyset$. We claim that either there exists a vertex $c \in C_k\cap V_C$ such that $u \notin N(c)$, or $G[C_k]$ has a leaf $x$ such that $u \notin N(x)$. Indeed, in the second case, the existence of such a leaf is guaranteed by the bound on $|C_k\cap V_C|$ (which holds by construction) and by the fact that no other assignment vertex can be in $C_k$. In the first case we set $y = c$, while in the second we set $y = x$. We create a new partition as follows:
• we remove $y$ from $C_k$,
• we move $w$ from its set to $C_k$, and
• we add a new set $C = \{y\}$ to the partition.
Let $\mathcal{P}'$ be this new $\mathcal{C}$-partition. We have that $v(\mathcal{P}') > v(\mathcal{P})$. Indeed, $w$ has at most one neighbor that does not belong to $C_k$. Therefore, moving $w$ to $C_k$ increases the number of edges by at least $4$ ($w$ is adjacent to all vertices of $U_u$ and $U_u \subseteq C_k$). Consider first the case where $y$ is a vertex $c \in C_k\cap V_C$ such that $u \notin N(c)$. Since $u$ is the only assignment vertex in $C_k$, and there are at most $3$ edges connecting $c$ to variable vertices, removing $c$ from $C_k$ reduces the value of $\mathcal{P}$ by at most $3$. Therefore, $v(\mathcal{P}') - v(\mathcal{P}) = |E(\mathcal{P}')| - |E(\mathcal{P})| \ge 1$. This is a contradiction to the optimality of $\mathcal{P}$. Similarly, in the case where $y$ is a leaf such that $u \notin N(y)$, removing $y$ from $C_k$ reduces the value of $\mathcal{P}$ by at most $1$. This again contradicts the optimality of $\mathcal{P}$.
Case 2.b: $C_k\cap V_C = \emptyset$. Since $G[C_k]$ is connected, $|N(u)\cup V_{i,j}| < \mathcal{C}$ and $C_k\cap V_C = \emptyset$, there exists a pair $(v,x) \in V_{i,j}\times U_{i,j}$ such that $x \ne u$ and $C_k\cap I_{v,x} \ne \emptyset$. Also, by Lemmas 3.12 and 3.11, we have that $U_x\cap C_k = \emptyset$. Therefore, any vertex $y \in C_k\cap I_{v,x}$ contributes at most one edge to $E(\mathcal{P})$. We create a new partition as follows:
• we select a vertex $y \in C_k\cap I_{v,x}$ and remove it from $C_k$,
• we move $w$ from its set to $C_k$, and
• we add a new set $C = \{y\}$ to the partition.
This is a contradiction to the optimality of $\mathcal{P}$, since the removal of $y$ from $C_k$ reduces the value of the partition by at most $1$, while moving $w$ to $C_k$ increases the value by at least $4$.
Summing up the previous lemmas, we can observe that, in any optimal $\mathcal{C}$-partition $\mathcal{P}$ of $G$, there is one component for each vertex $u \in \bigcup_{(i,j)\in[3]\times[\log n]} U_{i,j}$, and if $u \in C$, for some $C \in \mathcal{P}$, then $N(u)\setminus V_C \subseteq C$.
Lemma 3.14. Let $\mathcal{P} = \{C_1,\dots,C_p\}$ be an optimal $\mathcal{C}$-partition of $G$. For any $(i,j) \in [3]\times[\log n]$ and $v \in V_{i,j}$, if $v \in C_k$, for some $k \in [p]$, then $|C_k\cap U_{i,j}| = 1$.
Proof. Recall that, by Lemma 3.12, $|C_k\cap U_{i,j}|$ is either $1$ or $0$.
Assume that there exist $(i,j) \in [3]\times[\log n]$ and $v \in V_{i,j}$ such that $v \in C_k$ and $|C_k\cap U_{i,j}| = 0$. By this assumption and Lemma 3.13, we can conclude that $N(v)\cap C_k \subseteq V_C$. Also, since each variable has at most $4$ appearances and $v$ represents at most $\frac{\log n}{2}$ variables, we have that $|N(v)\cap C_k| \le 2\log n$. Let $u \in U_{i,j}$ be an arbitrary assignment vertex. Also, let $C_\ell \ne C_k$ be the set of $\mathcal{P}$ such that $u \in C_\ell$. By Lemma 3.13, we know that $C_\ell\cap I_{v,u} = I_{v,u}$. Now we distinguish two cases: either $|C_\ell| < \mathcal{C}$ or $|C_\ell| = \mathcal{C}$.
Case 1: $|C_\ell| < \mathcal{C}$. We create a new partition as follows:
• we remove $v$ from $C_k$, and
• we add $v$ to $C_\ell$.
Let $\mathcal{P}'$ be the new partition; notice that this is a $\mathcal{C}$-partition as $|C_\ell\cup\{v\}| < \mathcal{C}+1$. Also, the removal of $v$ from $C_k$ reduces the value of the partition by at most $2\log n$, while the addition of $v$ to $C_\ell$ increases the value by $3\log n$. This is a contradiction to the optimality of $\mathcal{P}$.
Case 2: $|C_\ell| = \mathcal{C}$. Similarly to the proof of Lemma 3.13, we assume that $G[C_\ell]$ is connected. Also, since any set $I_{v',x}$, for $(v',x) \in V_{i,j}\times U_{i,j}$, is a subset of the set of the partition that includes $x$, we have that $C_\ell\cap V_C \ne \emptyset$. Indeed, assuming otherwise, we get that either $|C_\ell| < \mathcal{C}$ or $G[C_\ell]$ is not connected. We create a new partition as follows:
• we select (arbitrarily) a vertex $c \in C_\ell\cap V_C$ and remove it from $C_\ell$,
• we move $v$ from $C_k$ to $C_\ell$, and
• we add a new set $C = \{c\}$ to the partition.
We will show that the value of the new partition is greater than that of the original. First, notice that $c$ has at most four neighbors in $C_\ell$, as $C_\ell$ can include only one assignment vertex, and $v$ has at most $2\log n$ neighbors in $C_k$, as $N(v)\cap C_k \subseteq V_C$. Therefore, removing $c$ from $C_\ell$ and $v$ from $C_k$ reduces the value of the partition by at most $2\log n + 4$. Also, since $u \in C_\ell$ and by Lemma 3.13, we get that $I_{v,u} \subseteq C_\ell$. Thus, moving $v$ into $C_\ell$ increases the value of the partition by $|I_{v,u}| = 3\log n > 2\log n + 4$. This is a contradiction to the optimality of $\mathcal{P}$.
Next, we compute the minimum and maximum values that any optimal $\mathcal{C}$-partition of $G$ can admit.
Lemma 3.15. Let $\mathcal{P} = \{C_1,\dots,C_p\}$ be an optimal $\mathcal{C}$-partition of $G$. We have that $3N\log^2 n\,(180\sqrt{n}+6) \le v(\mathcal{P}) \le 3N\log^2 n\,(180\sqrt{n}+6) + 2m$, where $m = |V_C|$. Furthermore, if a vertex $c \in V_C$ belongs to $C \in \mathcal{P}$ and $|N(c)\cap C| = 2$, then $N(c)\cap C = \{v,u\}$, where $v \in V_{i,j}$ and $u \in U_{i,j}$, for some $(i,j) \in [3]\times[\log n]$.
Proof. First, we calculate the number of edges that $E(\mathcal{P})$ includes from any $G_{i,j}$. Notice that $W = V_{i,j}\cup U_{i,j}$ is a vertex cover of $G_{i,j}$ and no edge is incident to two vertices of this set. Therefore, we can compute $|E(\mathcal{P})\cap E(G_{i,j})|$ by counting the edges of $\mathcal{P}$ that are incident to a vertex of $W$. First, for any vertex $u \in U_{i,j}$, if $u \in C$, for a $C \in \mathcal{P}$, we have that $N(u)\setminus V_C \subseteq C$. Also, we know that $U_u \subseteq C$. Therefore, all the edges between $U_u$ and $N(U_u)\setminus V_C$ are in $E(\mathcal{P})$. So, for each $u \in U_{i,j}$, we have $5(30 + 2\cdot 3)N\log n = 180N\log n$ edges in $E(\mathcal{P})$ that are incident to vertices in $U_u$. Also, it follows from Lemma 3.14 that, for any vertex $v \in V_{i,j}$, there exists a (unique) $u \in U_{i,j}$ such that $\{v,u\} \subseteq C$ for some $C \in \mathcal{P}$. Furthermore, by Lemma 3.13, we have that $N(v)\cap C\cap V(G_{i,j}) = I_{v,u}$. Thus, for each $v \in V_{i,j}$, the set $E(\mathcal{P})$ includes $3\log n$ edges incident to $v$ and no other edge (from $E(G_{i,j})$) is incident to it.
Since we have not counted any edge more than once, we have that $|E(\mathcal{P})\cap E(G_{i,j})| = (180\sqrt{n}+6)N\log n$ for any $(i,j) \in [3]\times[\log n]$. Therefore, we have that $|E(\mathcal{P})\cap\bigcup_{(i,j)\in[3]\times[\log n]}E(G_{i,j})| = 3N\log^2 n\,(180\sqrt{n}+6)$. Since there are no edges between $V(G_{i,j})$ and $V(G_{i',j'})$ for $(i,j) \ne (i',j')$, it remains to count the edges incident to vertices of $V_C$. For any $(i,j) \in [3]\times[\log n]$ and any $c \in V_C$, we have that $|N(c)\cap V_{i,j}| \le 1$, as the clause represented by $c$ has at most one variable from the variable set $X_i$ and the vertices of any $V_{i,j}$ represent variables from $X_i$. Assume that $c \in C$, for $C \in \mathcal{P}$. If $C\cap U_{i,j} = \emptyset$ for all $(i,j) \in [3]\times[\log n]$, then $c$ has no neighbors in $C$. Indeed, by Lemma 3.14, we have that any variable vertex appears in the same set as one assignment vertex. Now, assume that $C$ includes a $u \in U_{i,j}$ for some $(i,j) \in [3]\times[\log n]$. By Lemma 3.12, there is no other assignment vertex in $C$. Also, by Lemma 3.14, only variable vertices from $V_{i,j}$ can be in $C$. Therefore, $c$ has at most $2$ neighbors in $C$ (one variable vertex and one assignment vertex). Since the sets of edges are disjoint, we have at most $2$ extra edges per clause vertex $c \in V_C$. This concludes the proof of this lemma.
We are now ready to prove our result.
Theorem 1.4. Unless the ETH fails, $\mathcal{C}$-Coalition Formation does not admit an algorithm running in time $(\mathcal{C}\cdot\mathrm{vc})^{o(\mathrm{vc}+\mathcal{C})}\,n^{O(1)}$, where $\mathrm{vc}$ denotes the vertex cover number of the input graph.
Proof. Let $\phi$ be the formula that is given in the input of the R3-SAT problem, and let $G$ be the graph constructed from $\phi$ as described before. We will show that $\phi$ is satisfiable if and only if $G$ has a $\mathcal{C}$-partition of value $3N\log^2 n\,(180\sqrt{n}+6) + 2m$, where $\mathcal{C} = 42N\log n$ and $N = \lceil n/\log^2 n\rceil$.
Assume that $\phi$ is satisfiable and let $\alpha: X\to\{\mathrm{true},\mathrm{false}\}$ be a satisfying assignment. We will construct a $\mathcal{C}$-partition of $G$ of the wanted value. First, for each assignment vertex $u$, create a set $C_u = U_u\cup(N(u)\setminus V_C)$. We then extend these sets as follows. Consider a variable vertex $v$ and restrict the assignment $\alpha$ to the variable set $X(v)$. By construction, there exists an assignment vertex $u$ that represents this restriction of $\alpha$. Notice that there may exist more than one such vertex; in this case, we select one of them arbitrarily. We add $v$ to the set $C_u$ that corresponds to $u$. We repeat the process for all variable vertices. Next, we consider the vertices in $V_C$. Let $c \in V_C$ be a vertex that represents a clause of $\phi$. Since $\alpha$ is a satisfying assignment, there exists a literal in this clause that is set to true by $\alpha$. Let $x$ be the variable of this literal. We find the set $C_u$ such that $v \in C_u$ and $x \in X(v)$. We add $c$ to $C_u$, and we repeat this for the rest of the vertices in $V_C$.
We claim that the partition $\mathcal{P} = \{C_u \mid u$ is an assignment vertex$\}$ is an optimal $\mathcal{C}$-partition of $G$. We first show that this is indeed a $\mathcal{C}$-partition. By construction, for any $C \in \mathcal{P}$, there are a pair $(i,j) \in [3]\times[\log n]$ and a vertex $u \in U_{i,j}$ such that $C \subseteq V_{i,j}\cup N[U_u]\cup V_C$. Notice that $|V_{i,j}\cup N[U_u]| = 2N + 2N\cdot 3\log n + 30N\log n + 5$. We now calculate $|C\cap V_C|$. By construction, if $c \in C\cap V_C$, there exists a vertex $v \in V_{i,j}$ such that $v \in C$. Therefore, $N(V_{i,j})\cap V_C \supseteq C\cap V_C$. Since each $v \in V_{i,j}$ represents $\frac{\log n}{2}$ variables and each variable appears in at most $4$ clauses, we have that $|N(V_{i,j})\cap V_C| \le |V_{i,j}|\cdot 2\log n \le 4N\log n$. Thus, $|C| \le 2N + 2N\cdot 3\log n + 30N\log n + 4N\log n + 5 < 42N\log n = \mathcal{C}$ for sufficiently large $n$.
We now need to argue about the optimality of $\mathcal{P}$. Using the same arguments as in Lemma 3.15, we can show that $E(\mathcal{P})\cap E(G_{i,j})$ includes exactly $(180\sqrt{n}+6)N\log n$ edges. Thus, $|E(\mathcal{P})\cap\bigcup_{(i,j)\in[3]\times[\log n]}E(G_{i,j})| = 3N\log^2 n\,(180\sqrt{n}+6)$. Therefore, we need to show that there are $2m$ additional edges in $E(\mathcal{P})$ that are incident to vertices of $V_C$. Notice that, for any $c \in V_C$, there exists a $C_u$ such that $c \in C_u$ and there exist vertices $v,u$ in $C_u$ that are both adjacent to $c$ (which holds by the selection of $C_u$), with $v$ being a variable vertex and $u$ an assignment vertex. Finally, by construction, there are at most $2$ edges incident to $c$ in $E(\mathcal{P})$. Therefore, $v(\mathcal{P}) = 3N\log^2 n\,(180\sqrt{n}+6) + 2m$.
For the reverse direction, assume that we have a $\mathcal{C}$-partition $\mathcal{P}$ of $G$, with $v(\mathcal{P}) = 3N\log^2 n\,(180\sqrt{n}+6) + 2m$. By Lemma 3.15, we have that each vertex $c \in V_C$ must be in a set $C \in \mathcal{P}$ such that:
• $|N(c)\cap C| = 2$, and
• there exists $(i,j) \in [3]\times[\log n]$ such that $v \in V_{i,j}\cap C$, $u \in U_{i,j}\cap C$ and $\{v,u\} \subseteq N(c)$.
We construct an assignment $\alpha$ of $\phi$ that corresponds to this partition as follows. For each variable $x$, consider the variable vertex $v$ such that $x \in X(v)$. By Lemma 3.14, there exists a unique assignment vertex $u$ such that $v$ and $u$ belong to the same component of $\mathcal{P}$. Let $\sigma_{v,u}$ be the assignment represented by $u$ for $X(v)$. We set $\alpha(x) = \sigma_{v,u}(x)$. Notice that each variable appears in the set of one variable vertex, and for each such vertex we have selected a unique assignment (represented by the assignment vertex in its set). Therefore, the assignment we create in this way is indeed well defined.
We claim that $\alpha$ is a satisfying assignment. Consider a clause of $\phi$ and assume that $c$ is the corresponding clause vertex in $V_C$. Assume that $c \in C$ for some $C \in \mathcal{P}$. By Lemma 3.15, we have that $|N(c)\cap C| = 2$ and there exists $(i,j) \in [3]\times[\log n]$ such that $v \in V_{i,j}\cap C$, $u \in U_{i,j}\cap C$ and $\{v,u\} \subseteq N(c)$. Since $v \in N(c)$, we know that there exists a variable $x \in X(v)$ that appears in a literal $l$ of the clause represented by $c$. Observe that $v$ is unique. Moreover, since $u,v \in V(G_{i,j})$ and $u \in N(c)$, the assignment $\sigma_{v,u}$, and hence $\alpha$, sets $l$ to true and thus satisfies the clause represented by $c$. This finishes the reduction.
The last thing that remains to be done is to bound $\mathrm{vc}(G) = \mathrm{vc}$, i.e., the vertex cover number of $G$, appropriately. Notice that the vertex set containing the $V_{i,j}$s, the $U_{i,j}$s and the copies of the vertices in the $U_{i,j}$s, for every $(i,j) \in [3]\times[\log n]$, is a vertex cover of the graph. Therefore, $\mathrm{vc} \le 3\log n\,(2N + 5\sqrt{n}) \in O\!\left(\frac{n}{\log n}\right)$. Additionally, $\mathcal{C} \in O\!\left(\frac{n}{\log n}\right)$. To sum up, if we had an algorithm that computed an optimal solution of the $\mathcal{C}$-Coalition Formation problem in time $(\mathcal{C}\cdot\mathrm{vc})^{o(\mathcal{C}+\mathrm{vc})}\,n^{O(1)}$, we would also solve the R3-SAT problem in time $\left(\frac{n}{\log n}\right)^{o\left(\frac{n}{\log n}\right)}$. This contradicts the ETH, since
$$\left(\frac{n}{\log n}\right)^{o\left(\frac{n}{\log n}\right)} = 2^{(\log n - \log\log n)\cdot o\left(\frac{n}{\log n}\right)} = 2^{o\left(n - \frac{n\log\log n}{\log n}\right)} = 2^{o(n)}.$$
4. Kernelization
In this section, we consider the kernelization complexity of $\mathcal{C}$-Coalition Formation and Weighted $\mathcal{C}$-Coalition Formation parameterized by the vertex cover number of the input graph and establish the following contrasting result. While the weighted and unweighted versions of this problem have similar asymptotic behavior from a parameterized complexity point of view, for the parameters $\mathrm{tw}$ and $\mathrm{vc}$, the kernelization complexity exhibits a stark contrast between the two versions.
This signifies that weights present a barrier from the kernelization complexity point of view. In particular, we establish that, while $\mathcal{C}$-Coalition Formation parameterized by $\mathrm{vc}+\mathcal{C}$ admits an $O(\mathrm{vc}^2\mathcal{C})$ vertex kernel, Weighted $\mathcal{C}$-Coalition Formation parameterized by $\mathrm{vc}+\mathcal{C}$ cannot admit any polynomial kernel unless the polynomial hierarchy collapses.
4.1. Polynomial Kernel for $\mathcal{C}$-Coalition Formation
In this section, we establish that $\mathcal{C}$-Coalition Formation admits a polynomial kernel parameterized by $\mathrm{vc}+\mathcal{C}$. We will use an auxiliary bipartite graph $H$ that we construct as follows. Let $U$ be a vertex cover of $G$ and let $I = V(G)\setminus U$. Then, $V(H)$ consists of two parts $X$ and $Y$ such that $Y = I$ and, for each $u \in U$, we add $t = \mathrm{vc}\cdot\mathcal{C} + \mathcal{C}$ many vertices $u_1,\dots,u_t$ to $X$. Moreover, if $uv \in E(G)$ with $u \in U$ and $v \in I$, we add the edge $u_i v$ in $H$ for each $i \in [t]$. Now, we compute a maximum matching $M$ in $H$. Let $Y' \subseteq Y$ be the set of vertices that are not matched in $M$. We have the following reduction rule (RR), a sketch of which is given after the proof of Lemma 4.1.
(RR): Delete an arbitrary vertex $w \in Y'$ from $I$.
Lemma 4.1. RR is safe.
Proof. First, observe that, in any $\mathcal{C}$-partition $\mathcal{P}$ of $G$, at most $\mathrm{vc}\cdot\mathcal{C}$ many vertices can participate in sets $C \in \mathcal{P}$ such that $C\cap U \ne \emptyset$, and these are the only vertices of $I$ that can contribute to the value of $\mathcal{P}$.
Now, let $G' = G[V(G)\setminus\{w\}]$. Since any $\mathcal{C}$-partition $\mathcal{P}'$ of $G'$ can easily be extended to a $\mathcal{C}$-partition of $G$ by adding to it the singleton set $\{w\}$, it suffices to show that the value of the optimal partition of $G$ and the value of the optimal partition of $G'$ are equal. Let $\mathcal{P} = \{C_1,\dots,C_p\}$ be an optimal $\mathcal{C}$-partition of $G$. We claim that there exists a $\mathcal{C}$-partition $\mathcal{P}^*$ such that $\{w\} \in \mathcal{P}^*$ and $v(\mathcal{P}^*) = v(\mathcal{P})$. Notice that, by proving that $\mathcal{P}^*$ exists, we also prove that any optimal $\mathcal{C}$-partition of $G'$ has the same value as any optimal $\mathcal{C}$-partition of $G$. Indeed, $\mathcal{P}^*\setminus\{\{w\}\}$ is a $\mathcal{C}$-partition of $G'$ and $v(\mathcal{P}^*\setminus\{\{w\}\}) = v(\mathcal{P}^*) = v(\mathcal{P})$; thus $\mathcal{P}^*\setminus\{\{w\}\}$ is an optimal $\mathcal{C}$-partition of $G'$ (otherwise, $\mathcal{P}$ is not an optimal $\mathcal{C}$-partition of $G$).
It remains to prove that such a $\mathcal{C}$-partition exists. In the case that $\{w\}$ is a singleton in $\mathcal{P}$, then $\mathcal{P}^* = \mathcal{P}$. Therefore, we assume that $w$ participates in some set $C \ne \{w\}$ of $\mathcal{P}$. Let $x \in U\cap C$ be such that $xw \in E(G)$. Then, observe that $x_1,\dots,x_t$ are matched to $t = \mathrm{vc}\cdot\mathcal{C} + \mathcal{C}$ many vertices of $I$ by the maximum matching in $H$ (indeed, since $w$ is unmatched and $x_iw \in E(H)$ for every $i \in [t]$, any unmatched copy $x_i$ would contradict the maximality of $M$); let $S_x$ be the set of these vertices. Observe that at least $\mathcal{C}$ of the vertices in $S_x$ are not contributing to the value of $\mathcal{P}$ (since at most $\mathrm{vc}\cdot\mathcal{C}$ many vertices can participate in sets $C \in \mathcal{P}$ such that $C\cap U \ne \emptyset$). We create a new set $C'$ by moving $\mathcal{C}-1$ of these vertices (all of which are adjacent to $x$) into $C'$, and we move $x$ from $C$ to $C'$. Observe that, after this step, we have that $v(\mathcal{P}') \ge v(\mathcal{P})$, as we remove at most $\mathcal{C}-1$ edges incident to $x$ in $C$ and add exactly $\mathcal{C}-1$ edges incident to $x$ in $C'$. We can keep repeating this step until $w$ is no longer in the same set as any of its neighbors, and at this point we can remove $w$ from its set (since its contribution is $0$) and add it as a singleton. Hence, we get the desired $\mathcal{P}^*$. This completes our proof.
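To make the rule concrete, here is a minimal sketch of RR, assuming the input is a networkx graph and that a vertex cover has already been computed elsewhere; the function name apply_RR and the tuple-based names of the copies are illustrative, not from the paper.

```python
import networkx as nx

def apply_RR(G: nx.Graph, cover: set, capacity: int) -> nx.Graph:
    """Exhaustively apply (RR): delete unmatched vertices of I = V(G) \\ cover."""
    G = G.copy()
    t = len(cover) * capacity + capacity            # t = vc * C + C copies per cover vertex
    while True:
        I = set(G.nodes) - cover                    # the independent set I
        H, top = nx.Graph(), set()
        for u in cover:                             # copies u_1, ..., u_t of each u in U
            for i in range(t):
                H.add_node((u, i))
                top.add((u, i))
        H.add_nodes_from(I)
        for u in cover:                             # edge u_i v for every uv in E(G), v in I
            for v in G.neighbors(u):
                if v in I:
                    for i in range(t):
                        H.add_edge((u, i), v)
        M = nx.bipartite.maximum_matching(H, top_nodes=top)
        unmatched = [v for v in I if v not in M]    # Y' = vertices of I unmatched by M
        if not unmatched:
            return G                                # RR exhausted: |V(G)| = O(vc^2 * C)
        G.remove_node(unmatched[0])                 # (RR): delete one vertex w of Y'
```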
It is straightforward to see that, once we cannot apply RR anymore, $|V(G)| = O(\mathrm{vc}^2\mathcal{C})$. This, along with Lemma 4.1, implies the following theorem.
Theorem 1.5. $\mathcal{C}$-Coalition Formation admits a kernel with $O(\mathrm{vc}^2\mathcal{C})$ vertices, where $\mathrm{vc}$ denotes the vertex cover number of the input graph.
[Figure 6: The graph $G$ constructed in the proof of Theorem 1.6 (only the cliques $\mathcal{V}_i$ and $\mathcal{V}_j$, the vertices $u^i, u^j$ with their independent sets $I_i, I_j$, an edge vertex $v_{yz}$ and the vertex $x$ are drawn). The vertex $v_{yz}$ represents the edge of $H$ with endpoints the vertices $y$ and $z$ such that $y \in V_i$ and $z \in V_j$ (for some $i < j \in [k]$). The red edges joining the vertex $v_{yz}$ to the red vertices of $\mathcal{V}_i$ and $\mathcal{V}_j$ are according to the selection sets of $y$ and $z$, respectively. That is, if $y$ is the $k$-th vertex of $V_i$ (according to the arbitrary enumeration that was chosen), then the first digit of the binary representation of $k$ would be a $1$ (there is an edge towards $v^i_1$), the second a $0$ (there is an edge towards $u^i_2$), and so on. The color blue is used to indicate which vertices would join the coalitions defined by the $u^i$s.]
4.2. Incompressibility of Weighted $\mathcal{C}$-Coalition Formation
Unfortunately, the above approach only works in the unweighted case. In fact, in the rest of this subsection, we show that the existence of a polynomial kernel parameterized by $\mathrm{vc}$ and $\mathcal{C}$ is highly unlikely for the general weighted case. This is achieved through a reduction from the k-Multicolored Clique (k-MC) problem.
k-Multicolored Clique
Input: A graph $H = (V,E)$ and a partition $(V_1,\dots,V_k)$ of $V$ into $k$ independent sets.
Question: Does there exist a set $S \subseteq V$ of size $k$ such that $H[S]$ is a clique?
We may additionally assume that $|V_1| = \dots = |V_k| = n = 2^m$ for some $m \in \mathbb{N}$ as, otherwise, we can pad each set with independent vertices. It is known that k-MC does not admit a kernel of size $\mathrm{poly}(k + \log n)$, unless the polynomial hierarchy collapses [43]. We proceed by describing the construction of our reduction, provide a high-level idea of the proof, show the needed properties, and finish by proving the theorem.
The construction. Given a graph $H$ as input to the k-Multicolored Clique problem, we construct an instance of $\mathcal{C}$-Coalition Formation, for $\mathcal{C} = \binom{k}{2} + k\log n + 1$, as follows (illustrated in Figure 6). For each set $V_i$, we first create a clique on $2\log n$ vertices $\mathcal{V}_i = \{u^i_j, v^i_j \mid j \in [\log n]\}$. We proceed by creating a vertex $u^i$ and an independent set $I_i$ of size $\mathcal{C} - \log n - 1$. Finally, we add all edges between $u^i$ and the vertices of $\mathcal{V}_i\cup I_i$.
Before we continue, we relate the vertices of each set $V_i$ with subsets of vertices of $\mathcal{V}_i$. Let $v_1,\dots,v_n$ be an enumeration of the vertices in $V_i$. We assign to each $v_j \in V_i$ a binary string of length $\log n$ such that the string assigned to $v_j$ represents the number $j$ in binary form. Let $s(v)$ be the string assigned to a vertex $v$ of the original graph. Also, for each $i$ and $v \in V_i$, we use $s(v)$ in order to define a set $S(v) \subseteq \mathcal{V}_i$ as follows (a small sketch of this encoding is given right after the list); for each $\ell \in [\log n]$:
• if the $\ell$-th letter of $s(v)$ is $0$, we add $u^i_\ell$ to $S(v)$;
• otherwise, we add $v^i_\ell$ to $S(v)$.
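The encoding $s(v)$ and the selection set $S(v)$ are simple enough to state in a few lines; the following sketch assumes the vertices of $V_i$ are indexed $1,\dots,n$ with $n$ a power of two, and the names s_of and S_of are hypothetical.

```python
from math import log2

def s_of(j: int, n: int) -> str:
    """Binary string of length log2(n) representing the index j."""
    return format(j, f"0{int(log2(n))}b")

def S_of(i: int, j: int, n: int) -> set:
    """Selection set S(v_j) in the clique V_i: letter 0 picks u^i_l, letter 1 picks v^i_l."""
    return {("u", i, l + 1) if bit == "0" else ("v", i, l + 1)
            for l, bit in enumerate(s_of(j, n))}

# Example: with n = 8, the 5th vertex of V_2 has s = "101", hence
# S(v_5) = {("v", 2, 1), ("u", 2, 2), ("v", 2, 3)}.
print(S_of(2, 5, 8))
```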
We continue by creating one vertex $v_e$ for each edge $e \in E$; we call this set of vertices $V_E$. We add edges incident to $v_e$ as follows. Let $e = uv$, where $u \in V_i$ and $v \in V_j$ for some $i,j \in [k]$; notice that $i \ne j$. We add all edges $v_e w$, where $w \in S(u)\cup S(v)$. Finally, we add a vertex $x$ and we add all the edges between $x$ and the vertices of $V_E\cup\bigcup_{i\in[k]}\mathcal{V}_i$. We call this new graph $G$. We complete the construction by defining the weight function $w: E(G)\to\mathbb{N}$ as follows.
• For any edge $e = u^i v$ where $v \in I_i$, we set $w(e) = 4\mathcal{C}^3$.
• For any edge $e = u^i v$ where $v \in \mathcal{V}_i$, we set $w(e) = 3\mathcal{C}^2$.
• For any edge $e = xv$ where $v \in \mathcal{V}_i$, we set $w(e) = 2\mathcal{C}$.
• For any other edge $e$, we set $w(e) = 1$.
High-level description. The rest of the proof consists in showing that there is a clique of order $k$ in $H$ if and only if there exists a $\mathcal{C}$-partition $\mathcal{P}$ of $G$ with $v(\mathcal{P})$ above some threshold. Intuitively, the weights of the edges incident to the $u^i$s are chosen so that every set of $\mathcal{P}$ contains at most one of the $u^i$s. Also, if a set of $\mathcal{P}$ contains a $u^i$, then it also contains $\log n$ vertices from $\mathcal{V}_i$; the remaining $\log n$ vertices of $\mathcal{V}_i$ are the encoding of a vertex $v \in V_i$. We then build an equivalence between locating the vertices in the $V_i$s that can form a clique in $H$ and having a set in $\mathcal{P}$ that contains $x$, $\ell = \binom{k}{2}$ "edge" vertices $v_{u_1v_1},\dots,v_{u_\ell v_\ell}$ and the $k\log n$ vertices contained in $S(u_1),\dots,S(u_\ell)$ and $S(v_1),\dots,S(v_\ell)$ that correspond to the encodings of the vertices $u_1,\dots,u_\ell$ and $v_1,\dots,v_\ell$.
Properties of optimal $\mathcal{C}$-partitions of $G$. Before we proceed with the reduction, we prove some properties of any optimal partition of $G$.
Lemma 4.2. Let $\mathcal{P} = \{C_1,\dots,C_p\}$ be an optimal $\mathcal{C}$-partition of $G$. For any $i \in [k]$, there exists a $j \in [p]$ such that $\{u^i\}\cup I_i \subseteq C_j$.
Proof. Assume that there exists an $i \in [k]$ such that $C_j\cap(\{u^i\}\cup I_i) \ne \{u^i\}\cup I_i$, for every $j \in [p]$. Let $C$ be the set $\{u^i\}\cup I_i$. We claim that the partition $\mathcal{P}' = \{C_1\setminus C,\dots,C_p\setminus C, C\}$ has higher value than $\mathcal{P}$. Indeed, by separating $C$ from the rest of the partition, we only lose the weights of the edges between $u^i$ and vertices of $\mathcal{V}_i$ (as all the other neighbors of $u^i$ are in $C$). Since all these edges have weight $3\mathcal{C}^2$, we are reducing the value of the partition by at most $\mathcal{C}\cdot 3\mathcal{C}^2$. On the other hand, notice that there exists at least one edge $e = u^i v$, for some $v \in I_i$, such that $w(e)$ is counted in $\mathcal{P}'$ but not in $\mathcal{P}$. Since $4\mathcal{C}^3 > \mathcal{C}\cdot 3\mathcal{C}^2$ does not hold, but $4\mathcal{C}^3 > 3\mathcal{C}^2\cdot 2\log n$ does (there are only $2\log n < \mathcal{C}$ such edges), we have that $v(\mathcal{P}') > v(\mathcal{P})$, which contradicts the optimality of $\mathcal{P}$.
Lemma 4.3. Let $\mathcal{P} = \{C_1,\dots,C_p\}$ be an optimal $\mathcal{C}$-partition of $G$. If $u^i \in C_j$, for some $(i,j) \in [k]\times[p]$, then $|C_j\cap\mathcal{V}_i| = \log n$.
Proof. Assume that there exists a $u^i \in C_j$, for some $(i,j) \in [k]\times[p]$, such that $|C_j\cap\mathcal{V}_i| < \log n$. Notice that $|C_j\cap\mathcal{V}_i|$ is at most $\log n$ by Lemma 4.2. Select (arbitrarily) a set $U$ such that $\mathcal{V}_i \supset U \supseteq C_j\cap\mathcal{V}_i$ and $|U| = \log n$. We set $C = U\cup I_i\cup\{u^i\}$. Then, we create the partition $\mathcal{P}' = \{C_1\setminus C,\dots,C_p\setminus C, C\}$. We claim that $v(\mathcal{P}') > v(\mathcal{P})$. Indeed, there is at least one edge $u^i v \in E(\mathcal{P}')\setminus E(\mathcal{P})$, and this edge has weight $3\mathcal{C}^2$. Now, consider an edge $e \in E(\mathcal{P})\setminus E(\mathcal{P}')$. It is not hard to see that $w(e) = 1$ or $w(e) = 2\mathcal{C}$.
Also, since any edge with weight $2\mathcal{C}$ is incident to $x$, we may have fewer than $\mathcal{C}-1$ such edges in $E(\mathcal{P})\setminus E(\mathcal{P}')$. Thus, the total weight of the edges in $E(\mathcal{P})\setminus E(\mathcal{P}')$ is less than $\mathcal{C}\cdot 2\mathcal{C} + \binom{\mathcal{C}}{2} < 3\mathcal{C}^2$. Therefore, $v(\mathcal{P}') > v(\mathcal{P})$, which contradicts the optimality of $\mathcal{P}$.
Lemma 4.4. Let $\mathcal{P} = \{C_1,\dots,C_p\}$ be an optimal $\mathcal{C}$-partition of $G$ and $x \in C_\ell$, for an $\ell \in [p]$. Then, $|C_\ell\cap\mathcal{V}_i| = \log n$ for all $i \in [k]$.
Proof. It follows from Lemmas 4.2 and 4.3 that, for each $i \in [k]$, there are exactly $\log n$ vertices from $\mathcal{V}_i$ that are in the same set as $\{u^i\}\cup I_i$. Let $C^i \in \mathcal{P}$ be the set that includes $u^i$. Observe that $|C^i| = \mathcal{C}$. Thus, no other vertex has been included in $C^i$. Therefore, for all $i \in [k]$, there are exactly $\log n$ vertices of $\mathcal{V}_i$ that are not in the same set as $u^i$; let $S_i$ be this subset of $\mathcal{V}_i$, that is, $S_i = \mathcal{V}_i\setminus C^i$. We will show that, for all $i \in [k]$, we have that $S_i \subseteq C_\ell$. Assume that there exists an $i$ such that $S_i \nsubseteq C_\ell$ and let $u \in S_i\setminus C_\ell$. We consider two cases: either $|C_\ell| < \mathcal{C}$ or not.
Case 1 ($|C_\ell| < \mathcal{C}$): we create the following partition:
• remove $u$ from its current set, and
• add $u$ to $C_\ell$.
Since $u$ was not in the same set as $u^i$ or $x$ in $\mathcal{P}$, any edge $e \in E(\mathcal{P})$ that is incident to $u$ has weight $1$. Also, since $\mathcal{P}$ is a $\mathcal{C}$-partition, $u$ has at most $\mathcal{C}-1$ neighbors in its own set in $\mathcal{P}$. Thus, moving $u$ to a different set reduces the value of the partition by at most $\mathcal{C}-1$. On the other hand, $E(\mathcal{P}')$ now includes at least the edge $xu$, and $w(xu) = 2\mathcal{C} > \mathcal{C}$. This contradicts the optimality of $\mathcal{P}$.
Case 2 ($|C_\ell| = \mathcal{C}$): then we have at least one edge vertex $v_e$ in $C_\ell$. We create a new $\mathcal{C}$-partition by swapping $u$ and $v_e$. Again, moving $u$ to a different set reduces the value of the partition by at most $\mathcal{C}-1$. Also, recall that, by construction, $d(v_e) = 2\log n + 1$ and all of these edges have weight $1$. Therefore, moving $v_e$ to a different set reduces the value of the partition by at most $2\log n + 1$. The fact that $E(\mathcal{P}')$ includes at least the edge $xu$ and $w(xu) = 2\mathcal{C} > \mathcal{C} + 2\log n + 1$ contradicts the optimality of $\mathcal{P}$.
We are now ready to prove our result.
Theorem 1.6. Weighted $\mathcal{C}$-Coalition Formation parameterized by $\mathrm{vc}+\mathcal{C}$ does not admit a polynomial kernel, unless the polynomial hierarchy collapses, where $\mathrm{vc}$ denotes the vertex cover number of the input graph.
Proof. Let $H$ be the input graph of the k-Multicolored Clique problem, and let $G$ be the graph constructed from $H$ as described above. We will show that $H$ has a clique of order $k$ if and only if any optimal $\mathcal{C}$-partition $\mathcal{P}$ of $G$ has value at least
$$v(\mathcal{P}) = k\left[4\mathcal{C}^3(\mathcal{C}-\log n-1) + (3\mathcal{C}^2+2\mathcal{C})\log n + 2\binom{\log n}{2}\right] + \binom{k}{2}(2\log n+1).$$
Assume that $H$ has a clique of order $k$ and let $v^i$ be the vertex of this clique that also belongs to $V_i$, for each $i \in [k]$. For each $i \in [k]$, we create the set $C_i = \{u^i\}\cup I_i\cup(\mathcal{V}_i\setminus S(v^i))$. Then, we create a set $C = \{x\}\cup\bigcup_{i\in[k]}S(v^i)\cup\{v_e \mid e = v^iv^j, 1 \le i < j \le k\}$. Finally, we add one singleton set for each remaining vertex $v_e$. Let $\mathcal{P} = \{C_1,\dots,C_p\}$, $p > k+1$, be the resulting partition. We claim that $v(\mathcal{P})$ equals the value displayed above.
Indeed, for any $i \in [k]$, the sum of the weights of the edges of $G[C_i]$ is exactly $4\mathcal{C}^3(\mathcal{C}-\log n-1) + 3\mathcal{C}^2\log n + \binom{\log n}{2}$. Also, by construction, the sum of the weights of the edges of $G[C]$ is exactly $k\cdot 2\mathcal{C}\log n + k\binom{\log n}{2} + \binom{k}{2}(2\log n+1)$. Finally, all the other sets are singletons. Thus, $v(\mathcal{P}) = k[4\mathcal{C}^3(\mathcal{C}-\log n-1) + (3\mathcal{C}^2+2\mathcal{C})\log n + 2\binom{\log n}{2}] + \binom{k}{2}(2\log n+1)$.
For the reverse direction, assume that we have a partition $\mathcal{P}$ that has value $v(\mathcal{P}) = k[4\mathcal{C}^3(\mathcal{C}-\log n-1) + (3\mathcal{C}^2+2\mathcal{C})\log n + 2\binom{\log n}{2}] + \binom{k}{2}(2\log n+1)$. By Lemmas 4.2 and 4.3, we know that, for each $i \in [k]$, there exists a set $C \in \mathcal{P}$ such that $C \supseteq \{u^i\}\cup I_i$ and $C\setminus(\{u^i\}\cup I_i) \subseteq \mathcal{V}_i$. Let us reorder the sets of $\mathcal{P}$ such that $\mathcal{P} = \{C_1,\dots,C_p\}$ and $u^i \in C_i$, for all $i \in [k]$. First, we calculate the maximum value of $\{C_1,\dots,C_k\}$. Notice that, for any $i$, the set $C_i$ includes exactly $\log n$ vertices from $\mathcal{V}_i$ and the set $\{u^i\}\cup I_i$. Therefore, we need to take into account:
• $\binom{\log n}{2}$ edges of weight $1$, between the vertices of $\mathcal{V}_i$,
• $\log n$ edges of weight $3\mathcal{C}^2$, between the vertices of $\mathcal{V}_i$ and $u^i$, and
• $\mathcal{C}-\log n-1$ edges of weight $4\mathcal{C}^3$, between the vertices of $I_i$ and $u^i$.
In total, this gives us a value of $\binom{\log n}{2} + 3\mathcal{C}^2\log n + 4\mathcal{C}^3(\mathcal{C}-\log n-1)$, and this holds for all $i \in [k]$. By Lemma 4.4, we also know that there exists a set $C$ in $\mathcal{P}$ that includes the vertex $x$ together with the remaining vertices from the sets $\mathcal{V}_i$, $i \in [k]$. Notice that $C$ may also include up to $\binom{k}{2}$ vertices from $V_E$. Actually, $C$ must include exactly this many such vertices, as otherwise the edges incident to them would contribute nothing to the value of $\mathcal{P}$. We now calculate the value of the edges of $G[C]$ that are not incident to $V_E$. Notice that these edges are either between two vertices in the same set $\mathcal{V}_i$ or between a set $\mathcal{V}_i$ and $x$. Since, for each $i \in [k]$, there are $\log n$ vertices from $\mathcal{V}_i$, we have:
• $\binom{\log n}{2}$ edges of weight $1$, between the vertices of $\mathcal{V}_i$, for each $i \in [k]$, and
• $\log n$ edges of weight $2\mathcal{C}$, between the vertices of $\mathcal{V}_i$ and $x$, for each $i \in [k]$.
Therefore, by adding these to the value from the sets $C_i$, $i \in [k]$, we calculate a value of $k[4\mathcal{C}^3(\mathcal{C}-\log n-1) + (3\mathcal{C}^2+2\mathcal{C})\log n + 2\binom{\log n}{2}]$. Observe that the assumed value of $\mathcal{P}$ is higher than the one that we have calculated for the moment by $\binom{k}{2}(2\log n+1)$. Notice also that this extra value can be added only by the vertices from $V_E$ that can be in the same set as $x$. Finally, any vertex $v \in V_E\cap C$ can contribute at most $2\log n+1$, since $d(v) = 2\log n+1$ and all these edges have weight $1$. Therefore, in order to achieve the wanted value, we have that $|C\cap V_E| = \binom{k}{2}$ and $N(v) \subseteq C$ for each vertex $v \in C\cap V_E$.
Next, we will show that there is no pair $(i,j)$ for which there exist two distinct edges $e,e'$ such that $\{v_e,v_{e'}\} \subseteq C$, $e = uv$, where $u \in V_i$ and $v \in V_j$, and $e' = u'v'$, where $u' \in V_i$ and $v' \in V_j$. Notice that $N(v_e) \subseteq C$ and $N(v_e) = S(u)\cup S(v)\cup\{x\}$. Therefore, $C\cap\mathcal{V}_i = S(u)$ and $C\cap\mathcal{V}_j = S(v)$. Since the same holds for $v_{e'}$, we can conclude that $S(u) = S(u')$ and $S(v) = S(v')$. Thus, $e = e'$. This cannot happen because these vertices represent distinct edges of $H$ and there are no parallel edges in $H$.
We can conclude that no two vertices $v_e$ and $v_{e'}$ in $C$ can represent edges between vertices of the same pair of sets. Also, since we have $\binom{k}{2}$ such vertices, for each pair $(i,j)$ we have a vertex $v_e$ that represents an edge $uv$ where $u \in V_i$ and $v \in V_j$.
Now, consider the set of vertices $U = \{v \in V(H) \mid S(v) = C\cap\mathcal{V}_i$ for some $i \in [k]\}$. We claim that $U$ is a clique of order $k$ in $H$. We first show that, for each $i \in [k]$, we have that $C\cap\mathcal{V}_i = S(v)$ for some $v \in V_i$. As we mentioned, for each pair $(i,j)$ there exists one $e = uv$ where $u \in V_i$, $v \in V_j$ and $v_e \in C$. Also, $N(v_e) \subseteq C$ and $N(v_e) = S(u)\cup S(v)\cup\{x\}$. Therefore, $C\cap\mathcal{V}_i = S(u)$. Since this holds for any $i \in [k]$, we have that $U$ indeed represents a set of $k$ vertices in $H$. It remains to show that $U$ induces a clique. Consider two vertices $u,v \in U$ and let $u \in V_i$ and $v \in V_j$. Recall that, for each pair $(i,j)$, we have a vertex $v_e \in C$ such that $e = u'v'$, $u' \in V_i$ and $v' \in V_j$. Also, we have shown that $S(u') = C\cap\mathcal{V}_i$ and $S(v') = C\cap\mathcal{V}_j$. Therefore, $S(u') = S(u)$ and $S(v') = S(v)$, from which it follows that $e = uv$. Thus, there exists an edge between the two vertices. Since we have selected $u$ and $v$ arbitrarily, we have that $U$ is indeed a clique.
To fully prove the statement, it remains to be shown that the parameter that we are considering is bounded by a polynomial of $k + \log n$. Notice that the set $U' = \{x\}\cup\bigcup_{i\in[k]}(\mathcal{V}_i\cup\{u^i\})$ is a vertex cover of $G$. Also, $|\mathcal{V}_i| = 2\log n$ for all $i \in [k]$. Therefore, we have that $|U'| \in O(k\log n)$. Recall that we have set $\mathcal{C} = 1 + \binom{k}{2} + k\log n$. Therefore, $\mathrm{vc}+\mathcal{C} \in \mathrm{poly}(k + \log n)$.
5. Additional Structural Parameters
In this section, we consider two additional parameters of the input graph: vertex integrity and twin-cover number. In particular, we establish that, while $\mathcal{C}$-Coalition Formation admits an FPT algorithm parameterized by the vertex integrity of the input graph, it remains W[1]-hard when parameterized by the twin-cover number of the input graph.
5.1. FPT Algorithm parameterized by vertex integrity
In this section, we establish that $\mathcal{C}$-Coalition Formation admits an FPT algorithm when parameterized by the vertex integrity of the input graph.
Theorem 1.7. The $\mathcal{C}$-Coalition Formation problem can be solved in FPT time when parameterized by the vertex integrity of the input graph.
Proof. Let $U \subseteq V$ be such that $|U| = k' \le k$ and let $S_1,\dots,S_m$ be the vertex sets of the connected components of $G[V\setminus U]$. It follows that $|S_j| \le k$, $j \in [m]$. Let $\mathcal{P}' = \{C'_1,\dots,C'_p\}$ be the strict restriction¹ of an optimal $\mathcal{C}$-partition $\mathcal{P}$ of $G$ on the set $U$ (there are at most $|U|^{|U|} \le k^k$ possible restrictions of $\mathcal{P}$ on $U$). We will extend $\mathcal{P}'$ into an optimal $\mathcal{C}$-partition of $G$. To do so, we will organize the connected components of $G[V\setminus U]$ into a bounded number of different types, and run an ILP.
We begin by defining the types. Two graphs $G_i = G[U\cup S_i]$ and $G_j = G[U\cup S_j]$, $i,j \in [m]$, are of the same type if there exists a bijection² $f: U\cup S_i\to U\cup S_j$ such that $f(u) = u$ for all $u \in U$ and $N_{G_i}(u) = \{f^{-1}(v) \mid v \in N_{G_j}(f(u))\}$ for all $u \in S_i$. Note that, if such a function exists, then $G_i$ is isomorphic to $G_j$. Let $\mathcal{T}_1,\dots,\mathcal{T}_\ell$ be the different types that were defined (a brute-force sketch of this grouping is given below). Observe that $\ell$ is at most a function of $k$, since $|U| \le k$.
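As a rough illustration only, the types can be computed by pairwise isomorphism tests in which every vertex of $U$ is pinned to itself through a unique label; the brute-force sketch below uses networkx's matcher and illustrative names, and is one possible realization rather than the paper's algorithm.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def labeled(G, U, S):
    """Copy of G[U ∪ S] where each u in U carries a unique pin and S-vertices are unpinned."""
    H = nx.Graph()
    for v in U | S:
        H.add_node(v, pin=v if v in U else None)
    H.add_edges_from(G.subgraph(U | S).edges)
    return H

def same_type(G, U, Si, Sj):
    """Test the type equivalence: an isomorphism that fixes U pointwise."""
    gm = GraphMatcher(labeled(G, U, Si), labeled(G, U, Sj),
                      node_match=lambda a, b: a["pin"] == b["pin"])
    return gm.is_isomorphic()

def representatives(G, U, components):
    """Keep one representative component per type T_1, ..., T_ell."""
    reps = []
    for S in components:
        if not any(same_type(G, U, S, R) for R in reps):
            reps.append(S)
    return reps
```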
For each $i \in [\ell]$, we define the representative of $\mathcal{T}_i$ to be any connected component of $G[V\setminus U]$ that is contained in a graph of type $\mathcal{T}_i$; we will denote this graph by $G_{\mathcal{T}_i}$. For each $i \in [\ell]$, we will store a set of vectors $\tau^i_j$, for $j \in [q]$, which contain all possible orderings of all possible partitions of $V(G_{\mathcal{T}_i})$ into $p+k$ sets (some of which may be empty). If $G_{\mathcal{T}_i}$ follows the vector $\tau^i_j = (\alpha_1,\dots,\alpha_p,\alpha_{p+1},\dots,\alpha_{p+k})$, then $\alpha_1,\dots,\alpha_{p+k}$ is a partition of $V(G_{\mathcal{T}_i})$, and $\mathcal{P}^i_j = \{C'_1\cup\alpha_1,\dots,C'_p\cup\alpha_p,\alpha_{p+1},\dots,\alpha_{p+k}\}$ is a possible extension of $\mathcal{P}'$ including the vertices that belong to any component of type $i$, according to the vector $\tau^i_j$. For every $i \in [\ell]$ and $j \in [q]$, let $E^i_j = E(\mathcal{P}^i_j)\setminus E(\mathcal{P}')$ (i.e., the edges induced by $\mathcal{P}^i_j$ that are not already induced by $\mathcal{P}'$) be the important edges according to $\tau^i_j$. All that remains to be done is to search through these vectors and find the optimal ones among those that result in $\mathcal{C}$-partitions. This is achieved through the following ILP.
Variables:
• $x_i$, for $i \in [\ell]$: number of components of type $i$;
• $y_{i,j}$, for $i \in [\ell]$, $j \in [q]$: number of important edges according to $\tau^i_j$;
• $v_{i,j,l}$, for $i \in [\ell]$, $j \in [q]$, $l \in [p]$: number of vertices in the $l$-th position of the vector $\tau^i_j$;
• $z_{i,j}$, for $i \in [\ell]$, $j \in [q]$: number of components of type $i$ following the vector $\tau^i_j$.
Constants:
• $w_l$, for $l \in [p]$: number of vertices in $C'_l$.
Objective:
$$\max \sum_{i=1}^{\ell}\sum_{j=1}^{q} y_{i,j}\,z_{i,j} \qquad (1)$$
Constraints:
$$\sum_{j=1}^{q} z_{i,j} = x_i \quad \forall i \in [\ell] \qquad (2)$$
$$\sum_{i=1}^{\ell}\sum_{j=1}^{q} v_{i,j,l}\,z_{i,j} + w_l \le \mathcal{C} \quad \forall l \in [p] \qquad (3)$$
In the above model, constraint (2) is used to make sure that every component of type $i$ follows exactly one vector $\tau^i_j$. Then, constraint (3) is used to make sure that the resulting partition is indeed a $\mathcal{C}$-partition. Finally, since the number of variables of the model is bounded by a function of $k$, we can obtain a solution in FPT time, parameterized by $k$ (by running, for example, Lenstra's algorithm [50]).
¹ A restriction is strict if it only contains non-empty sets.
² Recall that a function $f: A\to B$ is a bijection if, for every $a_1,a_2 \in A$ with $a_1 \ne a_2$, we have that $f(a_1) \ne f(a_2)$, and, for every $b \in B$, there exists an $a \in A$ such that $f(a) = b$. Recall also that the inverse function of $f$, denoted by $f^{-1}$, exists if and only if $f$ is a bijection, and is such that $f^{-1}: B\to A$ and, for each $b \in B$, we have that $f^{-1}(b) = a$, where $f(a) = b$.
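A minimal sketch of this model in PuLP is given below, under the assumption that the counts $x_i$, $y_{i,j}$ and $v_{i,j,l}$ have been precomputed from the enumerated vectors $\tau^i_j$ (they are determined once the vectors are fixed), which makes the objective linear in the $z_{i,j}$ variables; all names are illustrative.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger

def solve_extension(ell, q, p, x, y, v, w, C):
    """x[i], y[i][j], v[i][j][l], w[l]: precomputed data; returns an optimal z[i, j]."""
    prob = LpProblem("extend_partition", LpMaximize)
    z = {(i, j): LpVariable(f"z_{i}_{j}", lowBound=0, cat=LpInteger)
         for i in range(ell) for j in range(q)}
    # Objective (1): total number of important edges gained by the extension
    prob += lpSum(y[i][j] * z[i, j] for i in range(ell) for j in range(q))
    # Constraint (2): every component of type i follows exactly one vector
    for i in range(ell):
        prob += lpSum(z[i, j] for j in range(q)) == x[i]
    # Constraint (3): no coalition C'_l exceeds the capacity C
    for l in range(p):
        prob += lpSum(v[i][j][l] * z[i, j]
                      for i in range(ell) for j in range(q)) + w[l] <= C
    prob.solve()
    return {key: var.value() for key, var in z.items()}
```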
5.2. Intractability for twin-cover number
We close this section by proving that $\mathcal{C}$-Coalition Formation is W[1]-hard when parameterized by the twin-cover number of the input graph. This result is achieved through a reduction from the following problem, which was shown to be W[1]-hard when parameterized by $k$ in [46].
Unary Bin Packing
Input: A set of items $A = \{a_1,\dots,a_n\}$, a size function $s: A\to\mathbb{N}$ which returns the size of each item in unary encoding, and two integers $B$ and $k$.
Question: Is there a way to fit the items of $A$ into $k$ bins, so that every bin contains items of total size exactly $B$, and every item is assigned to exactly one bin?
Theorem 1.8. The $\mathcal{C}$-Coalition Formation problem is W[1]-hard when parameterized by the twin-cover number of $G$.
Proof. Let $(A, s, B, k)$, where $A = \{a_1,\dots,a_n\}$, be an instance of Unary Bin Packing. We construct an instance of $\mathcal{C}$-Coalition Formation as follows: for each $j \in [n]$, construct the clique $K_j$, which is of order $s(a_j)$. Then, for each $i \in [k]$, add one vertex $b_i$ and all the edges between $b_i$ and all the vertices of the cliques $K_j$, for all $j \in [n]$. Let $G$ be the resulting graph, and set $\mathcal{C} = B+1$ (a small sketch of this construction is given below). Observe that the twin-cover number of $G$ is at most $k$, as the set $\{b_1,\dots,b_k\}$ is a twin-cover of $G$. We will show that any optimal partition $\mathcal{P}$ of $(G,\mathcal{C})$ has value $v(\mathcal{P}) = \sum_{j=1}^{n}\frac{s(a_j)(s(a_j)-1)}{2} + kB$ if and only if $(A,s,B,k)$ is a yes-instance of Unary Bin Packing.
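A small sketch of this construction, assuming networkx and illustrative vertex names:

```python
import networkx as nx

def build_instance(sizes, B, k):
    """Cliques K_j of order s(a_j); bin vertices b_i joined to all clique vertices; C = B + 1."""
    G = nx.Graph()
    for j, s in enumerate(sizes):                  # clique K_j for item a_j
        clique = [("K", j, t) for t in range(s)]
        G.add_nodes_from(clique)
        G.add_edges_from((clique[a], clique[b])
                         for a in range(s) for b in range(a + 1, s))
    item_vertices = list(G.nodes)                  # all clique vertices built so far
    for i in range(k):                             # b_i adjacent to every clique vertex
        G.add_edges_from((("b", i), v) for v in item_vertices)
    return G, B + 1                                # capacity C = B + 1
```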
For the first direction of the reduction, let $(A,s,B,k)$ be a yes-instance of Unary Bin Packing and let $f: A\to[k]$ be the returned function assigning items to bins, such that every bin contains items with total size exactly equal to $B$. We define a partition $\mathcal{P}$ of $V(G)$ into $k$ sets $C_1,\dots,C_k$ as follows. For every $i \in [k]$, the set $C_i$ contains $b_i$ and all the vertices of every clique $K_j$ such that $f(a_j) = i$, for all $j \in [n]$. Clearly, $|C_i| = B+1 = \mathcal{C}$ for every $i \in [k]$ and, thus, $\mathcal{P}$ is a $\mathcal{C}$-partition of $G$. Moreover, $E(\mathcal{P})$ contains all the edges that belong to the clique $K_j$, for every $j \in [n]$, and exactly $B$ edges incident to $b_i$, for each $i \in [k]$. In total, $v(\mathcal{P}) = |E(\mathcal{P})| = \sum_{j=1}^{n}\frac{s(a_j)(s(a_j)-1)}{2} + kB$.
For the reverse direction, let $G$ be an instance of $\mathcal{C}$-Coalition Formation and $\mathcal{P} = \{C_1,\dots,C_p\}$ be a $\mathcal{C}$-partition of $G$ with value $v(\mathcal{P}) = \sum_{j=1}^{n}\frac{s(a_j)(s(a_j)-1)}{2} + kB$. Let $G' = G-\{b_1,\dots,b_k\}$ and observe that $|E(G')| = \sum_{j=1}^{n}\frac{s(a_j)(s(a_j)-1)}{2}$. We have the following two claims.
Claim 5.1. For each $i \in [k]$, there exists a unique $\ell \in [p]$ such that $b_i \in C_\ell$. Moreover, $p = k$ and $|C_\ell| = B+1$.
Proof of the claim. In order for $v(\mathcal{P})$ to have the correct value, and by the construction of $G$, each one of the vertices $b_1,\dots,b_k$ contributes exactly $B$ edges to $v(\mathcal{P})$. Indeed, since $|C_i| \le B+1$, $i \in [p]$, no vertex can contribute more than $B$ edges towards $v(\mathcal{P})$. Assume now that there exist $i < i' \in [k]$ and $\ell \in [p]$ such that $b_i$ and $b_{i'}$ both belong to $C_\ell$. Then, since $\mathcal{C} = B+1$ and by the construction of $G$, we have that $C_\ell$ contains at most $B-1$ edges incident to each of $b_i$ and $b_{i'}$, which is a contradiction. Finally, for all $i \in [p]$, if $C_i$ contains a vertex from $\{b_1,\dots,b_k\}$, then $|C_i| = B+1$. It also follows that $p = k$. ⋄
Claim 5.2. For each $j \in [n]$, all the vertices of $K_j$ belong to the same set of $\mathcal{P}$.
Proof of the claim. In order for $v(\mathcal{P})$ to have the correct value, and by the construction of $G$, we have that, for each $j \in [n]$, each vertex of $K_j$ contributes all of its incident edges in $G'$ towards $v(\mathcal{P})$. ⋄
We are now ready to show that $(A,s,B,k)$ is a yes-instance of the Unary Bin Packing problem. Let $\mathcal{P}$ be an optimal $\mathcal{C}$-partition of $G$. It follows from Claim 5.1 that $\mathcal{P}$ consists of $k$ sets $\{C_1,\dots,C_k\}$. We create the bins $B_1,\dots,B_k$. For each $j \in [n]$, we insert the item $a_j$ in the bin $B_i$, $i \in [k]$, if and only if $K_j \subseteq C_i$. It follows from Claim 5.2 that each item of $A$ has been assigned to exactly one bin. Recall that, for each $j \in [n]$, the item $a_j$ corresponds exactly to the clique $K_j$ and has size equal to the order of $K_j$ (by construction). It remains to show that $\sum_{a_j\in B_\ell}s(a_j) = B$ for all $\ell \in [k]$. Recall that $|V(K_j)| = s(a_j)$, for $j \in [n]$. Let $\ell \in [k]$. We have that $\sum_{a_j\in B_\ell}s(a_j) = \sum_{a_j\in B_\ell}|V(K_j)|$. Also, $|C_\ell| = \sum_{a_j\in B_\ell}|V(K_j)| + 1$, since $C_\ell$ contains the cliques that correspond to the items contained in $B_\ell$ and one vertex from $\{b_1,\dots,b_k\}$. Thus, $\sum_{a_j\in B_\ell}s(a_j) = |C_\ell| - 1 = B$.
6. Conclusion
In this paper, we studied the $\mathcal{C}$-Coalition Formation problem, considering both its weighted and unweighted versions, through the lens of parameterized complexity. The main takeaway message is that the problems behave relatively well with regard to many widely used parameters, despite the multiple intractability results that we provided. Moreover, our intractability results provide motivation towards a more heuristic-oriented approach. From the theoretical point of view, there are many rather interesting questions that are born from our research. First, notice that, when we consider instances of Weighted $\mathcal{C}$-Coalition Formation where $\mathcal{C} < \mathrm{tw}$, we have no guarantee that the running time of the algorithm presented in Theorem 1.1 cannot be improved. Indeed, the lower bound we presented in Theorem 1.4 holds for instances where $\mathcal{C} \ge \mathrm{vc} \ge \mathrm{tw}$. Therefore, it would be interesting to investigate whether, by applying more advanced techniques (like the Cut & Count technique [20]), we can solve Weighted $\mathcal{C}$-Coalition Formation in time $\mathcal{C}^{O(\mathrm{tw})}n^{O(1)}$ (which is faster than the existing algorithm when $\mathcal{C} < \mathrm{tw}$) or prove that the lower bound of Theorem 1.4 holds even when $\mathrm{tw} > \mathcal{C}$. Finally, we wonder about the existence of an FPT algorithm for Weighted $\mathcal{C}$-Coalition Formation parameterized by other interesting parameters that are left untouched by our work, such as the vertex integrity, the neighborhood diversity and the feedback vertex number of the input graph.
7. Acknowledgments
FF and NM acknowledge the support by the European Union under the project Robotics and advanced industrial production (reg. no. CZ.02.01.01/00/22_008/0004590) and the CTU Global postdoc fellowship program. NM is also partially supported by Charles Univ. projects UNCE 24/SCI/008 and PRIMUS 24/SCI/012, and by the project 25-17221S of GAČR. HG acknowledges the support by the IDEX-ISITE initiative CAP 20-25 (ANR-16-IDEX-0001), the International Research Center "Innovation Transportation and Production Systems" of the I-SITE CAP 20-25, and the ANR project GRALMECO (ANR-21-CE48-0004).
References
[1] Alessandro Aloisio, Michele Flammini, and Cosimo Vinci. 2020. The impact of selfishness in hypergraph hedonic games. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 1766–1773.
[2] Haris Aziz, Florian Brandl, Felix Brandt, Paul Harrenstein, Martin Olsen, and Dominik Peters. 2019. Fractional hedonic games. ACM Transactions on Economics and Computation (TEAC) 7, 2 (2019), 1–29.
[3] Haris Aziz, Felix Brandt, and Hans Georg Seedig. 2013. Computing desirable partitions in additively separable hedonic games. Artificial Intelligence 195 (2013), 316–334.
[4] Yoram Bachrach, Pushmeet
Kohli, Vladimir Kolmogorov, and Morteza Zadimoghaddam. 2013. Optimal coalition structure generation in cooperative graph games. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 27. 81–87.
[5] Christer Bäckström, Yue Chen, Peter Jonsson, Sebastian Ordyniak, and Stefan Szeider. 2012. The complexity of planning revisited—a parameterized analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 26. 1735–1741.
[6] Nathanaël Barrot, Kazunori Ota, Yuko Sakurai, and Makoto Yokoo. 2019. Unknown agents in friends oriented hedonic games: Stability and complexity. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 1756–1763.
[7] Nathanaël Barrot and Makoto Yokoo. 2019. Stable and Envy-free Partitions in Hedonic Games. In IJCAI. 67–73.
[8] Christian Bessiere, Emmanuel Hebrard, Brahim Hnich, Zeynep Kiziltan, Claude Guy Quimper, and Toby Walsh. 2008. The parameterized complexity of global constraints. In AAAI Conference on Artificial Intelligence. AAAI Press, 235–240.
[9] Vittorio Bilò, Gianpiero Monaco, and Luca Moscardelli. 2022. Hedonic games with fixed-size coalitions. In Proceedings of the AAAI Conference on Artificial Intelligence.
[10] Hans L. Bodlaender. 1996. A Linear-Time Algorithm for Finding Tree-Decompositions of Small Treewidth. SIAM J. Comput. 25, 6 (1996), 1305–1317. https://doi.org/10.1137/S0097539793251219
[11] Hans L. Bodlaender. 1998. A partial k-arboretum of graphs with bounded treewidth. Theoretical Computer Science 209, 1 (1998), 1–45. https://doi.org/10.1016/S0304-3975(97)00228-4
[12] Niclas Boehmer and Edith Elkind. 2020. Individual-based stability in hedonic diversity games. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 1822–1829.
[13] Anna Bogomolnaia and Matthew O Jackson. 2002. The stability of hedonic coalition structures. Games and Economic Behavior 38, 2 (2002), 201–230.
[14] Felix Brandt, Martin Bullinger, and Anaëlle Wilczynski. 2023. Reaching individually stable coalition structures. ACM Transactions on Economics and Computation 11, 1-2 (2023), 1–65.
[15] Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D Procaccia. 2016. Handbook of computational social choice. Cambridge University Press.
[16] Robert Bredereck, Jiehua Chen, Rolf Niedermeier, and Toby Walsh. 2017. Parliamentary voting procedures: Agenda control, manipulation, and uncertainty. Journal of Artificial Intelligence Research 59 (2017), 133–173.
[17] Martin Bullinger and Stefan Kober. 2021. Loyalty in Cardinal Hedonic Games. In IJCAI. 66–72.
[18] Jiehua Chen, Gergely Csáji, Sanjukta Roy, and Sofia Simola. 2023. Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained Complexity. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023, London, United Kingdom, 29 May 2023 - 2 June 2023.
[19] Agnes Cseh, Tamás Fleiner, and Petra Harján. 2019. Pareto Optimal Coalitions of Fixed Size. Journal of Mechanism and Institution Design 4 (2019), 1.
[20] Marek Cygan, Fedor V. Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. 2015. Parameterized Algorithms. Springer. https://doi.org/10.1007/978-3-319-21275-3
[21] Andreas Darmann, Edith Elkind, Sascha Kurz, Jérôme Lang, Joachim Schauer, and Gerhard Woeginger. 2018. Group activity selection problem with approval preferences. International Journal of Game Theory 47 (2018), 767–796.
[22] Xiaotie Deng and Christos H Papadimitriou. 1994. On the complexity of cooperative solution concepts. Mathematics of Operations Research 19, 2 (1994), 257–266.
[23] Reinhard Diestel. 2012. Graph Theory, 4th Edition. Graduate Texts in Mathematics, Vol.
173. Springer. https://doi.org/10.1007/978-3-662-53622-3
[24] Rodney G. Downey and Michael R. Fellows. 2013. Fundamentals of Parameterized Complexity. Springer. https://doi.org/10.1007/978-1-4471-5559-1
[25] Rodney G. Downey and Michael R. Fellows. 2013. Fundamentals of Parameterized Complexity. Springer. https://doi.org/10.1007/978-1-4471-5559-1
[26] Pål Grønås Drange, Markus S. Dregi, and Pim van 't Hof. 2016. On the Computational Complexity of Vertex Integrity and Component Order Connectivity. Algorithmica 76, 4 (2016), 1181–1202.
[27] Jacques H Dreze and Joseph Greenberg. 1980. Hedonic coalitions: Optimality and stability. Econometrica: Journal of the Econometric Society (1980), 987–1003.
[28] Jack Edmonds. 1965. Paths, Trees, and Flowers. Canadian Journal of Mathematics 17 (1965), 449–467. https://doi.org/10.4153/CJM-1965-045-4
[29] Angelo Fanelli, Gianpiero Monaco, Luca Moscardelli, et al. 2021. Relaxed core stability in fractional hedonic games. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence. 182–188.
[30] M. R. Fellows. 2006. The Lost Continent of Polynomial Time: Preprocessing and Kernelization (IWPEC'06). Springer-Verlag, Berlin, Heidelberg, 276–277. https://doi.org/10.1007/11847250_25
[31] Foivos Fioravantes, Harmender Gahlawat, and Nikolaos Melissinos. 2025. Exact Algorithms and Lower Bounds for Forming Coalitions of Constrained Maximum Size. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39. 13847–13855.
[32] Michele Flammini, Gianpiero Monaco, Luca Moscardelli, Mordechai Shalom, and Shmuel Zaks. 2018. Online coalition structure generation in graph games. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. 1353–1361.
[33] Jörg Flum and Martin Grohe. 2006. Parameterized Complexity Theory. Springer. https://doi.org/10.1007/3-540-29953-X
[34] Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. 2019. Kernelization: Theory of Parameterized Preprocessing. Cambridge University Press. https://doi.org/10.1017/9781107415157
[35] Robert Ganian. 2011. Twin-Cover: Beyond Vertex Cover in Parameterized Algorithmics. In Parameterized and Exact Computation - 6th International Symposium, IPEC 2011 (Lecture Notes in Computer Science, Vol. 7112). Springer, 259–271. https://doi.org/10.1007/978-3-642-28050-4_21
[36] Robert Ganian, Thekla Hamm, Dušan Knop, Šimon Schierreich, and Ondřej Suchý. 2023. Hedonic diversity games: A complexity picture with more than two colors. Artificial Intelligence 325 (2023), 104017.
[37] Yong Gao. 2009. Data reductions, fixed parameter tractability, and random weighted d-CNF satisfiability. Artificial Intelligence 173, 14 (2009), 1343–1366.
[38] Jiong Guo and Rolf Niedermeier. 2007. Invitation to data reduction and problem kernelization. ACM SIGACT News 38, 1 (2007), 31–45.
[39] Gregory Z. Gutin, Eun Jung Kim, Arezou Soleimanfallah, Stefan Szeider, and Anders Yeo. 2012. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming. Algorithmica 64, 1 (2012), 112–125.
[40] Tesshu Hanaka, Airi Ikeyama, and Hirotaka Ono. 2023. Maximizing Utilitarian and Egalitarian Welfare of Fractional Hedonic Games on Tree-Like Graphs. In Combinatorial Optimization and Applications - 17th International Conference, COCOA 2023, Hawaii, HI, USA, December 15-17, 2023, Proceedings, Part I.
[41] Tesshu Hanaka, Hironori Kiya, Yasuhide Maei, and Hirotaka Ono. 2019. Computational Complexity of Hedonic Games on Sparse Graphs. In PRIMA 2019: Principles and Practice of Multi-Agent Systems - 22nd International Conference, Turin, Italy, October 28-31, 2019, Proceedings.
[42] Tesshu Hanaka and Michael Lampis. 2022. Hedonic Games and Treewidth Revisited. In 30th Annual European Symposium on Algorithms, ESA 2022. Schloss Dagstuhl-Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing.
[43] Danny Hermelin, Stefan Kratsch, Karolina Soltys, Magnus Wahlström, and Xi Wu. 2015.
A Completeness Theory for Polynomial (Turing) Kernelization. Algorithmica 71, 3 (2015), 702–730.
[44] Ayumi Igarashi, Kazunori Ota, Yuko Sakurai, and Makoto Yokoo. 2019. Robustness against agent failure in hedonic games. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. 364–370.
[45] Russell Impagliazzo and Ramamohan Paturi. 2001. On the Complexity of k-SAT. J. Comput. Syst. Sci. 62, 2 (2001), 367–375. https://doi.org/10.1006/JCSS.2000.1727
[46] Klaus Jansen, Stefan Kratsch, Dániel Marx, and Ildikó Schlotter. 2013. Bin packing with fixed number of bins revisited. J. Comput. System Sci. 79, 1 (2013), 39–49. https://doi.org/10.1016/j.jcss.2012.04.004
[47] Ton Kloks. 1994. Treewidth, Computations and Approximations. Lecture Notes in Computer Science, Vol. 842. Springer. https://doi.org/10.1007/BFb0045375
[48] Tuukka Korhonen and Daniel Lokshtanov. 2023. An Improved Parameterized Algorithm for Treewidth. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023. ACM, 528–541. https://doi.org/10.1145/3564246.3585245
[49] Euiwoong Lee. 2017. Partitioning a graph into small pieces with applications to path transversal. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 1546–1558.
[50] H. W. Lenstra Jr. 1983. Integer Programming with a Fixed Number of Variables. Mathematics of Operations Research 8, 4 (1983), 538–548.
[51] Chaya Levinger, Amos Azaria, and Noam Hazon. 2023. Social Aware Coalition Formation with Bounded Coalition Size. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems. 2667–2669.
[52] Fu Li. 2021. Fractional Hedonic Games With a Limited Number of Coalitions. In Proceedings of the 22nd Italian Conference on Theoretical Computer Science, Bologna, Italy, September 13-15, 2021.
[53] Silviu Maniu, Pierre Senellart, and Suraj Jog. 2019. An experimental study of the treewidth of real-world graph data. In ICDT 2019 – 22nd International Conference on Database Theory.
[54] Gianpiero Monaco and Luca Moscardelli. 2023. Nash Stability in Fractional Hedonic Games with Bounded Size Coalitions. In International Conference on Web and Internet Economics.
[55] Gianpiero Monaco, Luca Moscardelli, and Yllka Velaj. 2021. Additively Separable Hedonic Games with Social Context. Games 12, 3 (2021), 71.
[56] Rolf Niedermeier. 2006. Invitation to Fixed-Parameter Algorithms. Oxford University Press. https://doi.org/10.1093/ACPROF:OSO/9780198566076.001.0001
[57] Rolf Niedermeier and Peter Rossmanith. 2000. A general method to speed up fixed-parameter-tractable algorithms. Inform. Process. Lett. 73, 3-4 (2000), 125–129.
[58] Kazunori Ohta, Nathanaël Barrot, Anisse Ismaili, Yuko Sakurai, and Makoto Yokoo. 2017. Core Stability in Hedonic Games among Friends and Enemies: Impact of Neutrals. In IJCAI. 359–365.
[59] Martin Olsen. 2009. Nash stability in additively separable hedonic games and community structures. Theory of Computing Systems 45 (2009), 917–925.
[60] Dominik Peters. 2016. Graphical hedonic games of bounded treewidth. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 30.
[61] Dominik Peters. 2016. Towards Structural Tractability in Hedonic Games. In Proceedings of the AAAI Conference on Artificial Intelligence.
[62] Dominik Peters and Edith Elkind. 2015. Simple causes of complexity in hedonic games. In Proceedings of the 24th International Conference on Artificial Intelligence. 617–623.
[63] Walid Saad, Zhu Han, Tamer Basar, Mérouane Debbah, and Are Hjorungnes. 2010. Hedonic coalition formation for distributed task allocation among wireless agents. IEEE Transactions on Mobile Computing 10, 9 (2010), 1327–1344.
[64] Jakub Sliwinski and Yair Zick. 2017. Learning Hedonic Games. In IJCAI. 2730–
DAM: Domain-Aware Module for Multi-Domain Dataset Condensation
Jaehyun Choi, Gyojin Han, Dong-Jae Lee, Sunghyun Baek, Junmo Kim
Korea Advanced Institute of Science and Technology (KAIST)
{chlwogus, hangj0820, jhtwosun, baeksh, junmo.kim}@kaist.ac.kr
Preprint. Under review.
Abstract
Dataset Condensation (DC) has emerged as a promising solution to mitigate the computational and storage burdens associated with training deep learning models. However, existing DC methods largely overlook the multi-domain nature of modern datasets, which are increasingly composed of heterogeneous images spanning multiple domains. In this paper, we extend DC and introduce Multi-Domain Dataset Condensation (MDDC), which aims to condense data that generalizes across both single-domain and multi-domain settings. To this end, we propose the Domain-Aware Module (DAM), a training-time module that embeds domain-related features into each synthetic image via learnable spatial masks. As explicit domain labels are mostly unavailable in real-world datasets, we employ frequency-based pseudo-domain labeling, which leverages low-frequency amplitude statistics. DAM is only active during the condensation process, thus preserving the same images per class (IPC) as prior methods. Experiments show that DAM consistently improves in-domain, out-of-domain, and cross-architecture performance over baseline dataset condensation methods.
1 Introduction
Over the past decade, deep learning models have grown substantially in capacity, achieving remarkable progress in diverse tasks across vision, language, and multi-modal domains. This performance growth has been tightly coupled with dataset size. To meet this demand, data collection has shifted from manual curation to automated web crawling, yielding datasets that are not only large in size but also highly heterogeneous. These datasets often consist of samples drawn from various domains with drastically different visual characteristics, including changes in texture, lighting, color distribution, and abstraction level. While such diversity benefits model robustness, it also raises new challenges for training efficiency and data quality management at scale.
Dataset Condensation (DC) has emerged as a promising direction to reduce training cost by synthesizing a small set of highly informative samples. First introduced by Wang et al. [1], DC replaces the original training data with a compact synthetic dataset, optimized to preserve the training dynamics of real data. Recent DC methods improve this core idea using gradient matching [2], distribution alignment [3], or trajectory matching [4], and have shown strong results on curated benchmarks such as CIFAR [5]. These approaches significantly reduce training time and memory usage. However, nearly all existing methods assume that the dataset is homogeneous in style. This assumption breaks down in many realistic scenarios, where datasets contain images from multiple visual domains (e.g., photo, logo drawing, advertisement, art-painting, etc.) due to their mixed-source construction. When DC is applied to such mixed-domain data, synthetic images often collapse toward dominant domain styles, leading to degraded performance. Figure 1 illustrates this problem on the PACS [6] dataset, which includes 7 classes across 4 domains (e.g., Art-Painting, Cartoon, Photo, and Sketch).
[Figure 1: bar chart with two settings — "Single-Domain Setting (trained on only the Cartoon domain in PACS)" vs. "Multi-Domain Setting (trained on the whole PACS dataset)" — plotting classification accuracy (%) for the methods DC, DM, and MTT; the performance differences in the Cartoon domain are −15.7, −14.9, and −11.6 points, respectively.]
Figure 1: Performance
of single- and multi-domain training for existing dataset condensation methods (DC, DM, MTT) on the PACS dataset under a 10 images per class setting. In the single-domain setup, models are trained using only Cartoon domain images, assuming access to explicit domain labels. In contrast, the multi-domain setting trains on the full PACS dataset without domain supervision, reflecting modern datasets. In all prior methods, the performance drop in the multi-domain setting was significant.
We compare existing DC methods under two training settings, evaluating all models on the Cartoon domain. In the single-domain setting, the Cartoon domain is isolated using oracle domain labels, resulting in higher accuracy, but this relies on metadata that is typically unavailable in a real-world setting. In contrast, the multi-domain setting uses the full PACS dataset without domain separation, reflecting a more realistic scenario. Here, all prior methods suffer significant performance drops, often exceeding 10%p. One might consider mitigating this by condensing data separately per domain or using ground-truth domain labels during condensation. However, both strategies are fundamentally flawed within the DC framework. Per-domain condensation scales linearly with the number of domains, inflating the synthetic data size, while relying on domain labels assumes costly and often unavailable annotations.
To fill this gap, we introduce the Multi-Domain Dataset Condensation (MDDC) task with the goal of synthesizing a single compact dataset that performs well across both single-domain and multi-domain settings, without explicit domain labels or increasing the Images per Class (IPC). To solve this, we propose the Domain-Aware Module (DAM), a training-time module that equips each synthetic image with the capacity to represent multiple domain styles through learnable spatial masks. DAM enables the model to assign domain-specific features to different regions within an image, effectively encoding domain variation without requiring per-domain images. As explicit domain labels are unavailable, we propose a lightweight pseudo-domain labeling scheme based on frequency-domain characteristics. Specifically, we assign pseudo-domain labels by sorting real images by the mean amplitude of their low-frequency FFT components, a heuristic inspired by its success in domain adaptation [7, 8] (a sketch of this heuristic is given at the end of this section). These pseudo-labels are used to supervise DAM during training and are discarded afterward. DAM does not increase the number of synthetic images and introduces no additional overhead when training downstream models.
Through comprehensive experiments on five datasets, including CIFAR-10, CIFAR-100, Tiny ImageNet, PACS, and VLCS, and across architectures, including ConvNet, VGG, and ViT, we show that prior methods' performance deteriorates in the multi-domain setting and that DAM consistently improves performance in both in-domain and cross-domain evaluation settings, without compromising the efficiency goals of dataset condensation.
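The following is a hedged sketch of this frequency-based labeling; the size of the low-frequency window and the function names are our own illustrative choices, not necessarily the paper's exact implementation.

```python
import numpy as np

def low_freq_amplitude(img: np.ndarray, radius: int = 4) -> float:
    """Mean FFT amplitude inside a small centered low-frequency window."""
    gray = img.mean(axis=-1) if img.ndim == 3 else img       # HxWxC -> HxW
    amp = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    cy, cx = amp.shape[0] // 2, amp.shape[1] // 2
    return float(amp[cy - radius:cy + radius, cx - radius:cx + radius].mean())

def pseudo_domain_labels(images, num_domains: int) -> np.ndarray:
    """Sort images by low-frequency amplitude; split the ranking into equal bins."""
    order = np.argsort([low_freq_amplitude(im) for im in images])
    labels = np.empty(len(images), dtype=int)
    for d, idx in enumerate(np.array_split(order, num_domains)):
        labels[idx] = d                                      # pseudo-domain id per image
    return labels
```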
synthetic dataset while still maintaining the essential data characteristics of the original dataset for training deep learning models. This approach drastically reduces training time and computational costs, allowing models trained on the condensed dataset to achieve performance comparable to those trained on the original, large-scale datasets. Among various strategies in dataset condensation, including gradient matching methods [1, 2, 9], approaches based on distribution matching [3, 10], trajectory matching [4, 11], and generative-model-based approaches leveraging GANs or diffusion models [12, 13], we focus on gradient matching, distribution matching, and trajectory matching.

Gradient Matching. Dataset distillation methods based on gradient matching aim to match the gradients of a neural network that are calculated for a loss function over a synthetic dataset and the original dataset. DC [2] first formulated dataset distillation as a minimization problem between gradients calculated from an original dataset and a condensed dataset. IDC [9] improved data condensation by efficiently parameterizing synthetic data to preserve essential characteristics with a smaller dataset, generating multiple formations of data to maintain model performance while significantly reducing storage and computation costs. Zhang et al. [14] accelerated the distillation process by utilizing models in the early stages of training, rather than calculating gradients with randomly initialized models as in existing gradient-matching-based dataset distillation methods. To address the resulting lack of model diversity, they introduced a model augmentation technique by adding small perturbations to the parameters of selected early-stage models.

Distribution Matching. Dataset distillation methods based on distribution matching were proposed to overcome the limitations of gradient matching methods, which require complex optimization and high computational costs. DM [3] introduced a method that aligns the distributions of the original and synthetic datasets in embedding space, significantly improving the efficiency of dataset distillation and enabling condensed datasets to retain performance close to that of the original data, even with fewer data points. IDM [10] enhanced distribution matching by addressing class imbalance and embedding issues. They introduced new techniques, including split-and-augment augmentation, enhanced model sampling, and class-aware distribution normalization, to improve the diversity and representativeness of condensed datasets.

Trajectory Matching. MTT [4] developed a method to create condensed datasets by mimicking the training trajectories of models trained on the original dataset. By aligning the synthetic dataset's training path with that of the original data, they significantly improved the efficiency of dataset distillation. FTD [15] improved trajectory matching by addressing the accumulated trajectory error, which often led to discrepancies between training and evaluation performance. DATM [11] addressed limitations in prior dataset distillation methods by introducing difficulty-aligned trajectory matching. This approach enables effective distillation without performance loss even as the synthetic dataset size changes, and overcomes prior methods' inability to adapt to different pattern difficulties.
2.2 Domain-Aware Learning Approaches

Research in domain-aware learning is crucial to addressing performance degradation caused by discrepancies between different domains. Machine learning models tend to perform optimally when the distribution of training data matches that of test data. However, in real-world applications, data is
often collected across various domains with distinct distributions. These domain shifts can significantly impact a model's generalization performance; without addressing these differences, models may only be effective in limited, specific environments. Two prominent approaches to mitigate this issue are domain adaptation and domain generalization. Domain adaptation [16–19] focuses on improving the model's performance on a target domain by leveraging knowledge from a source domain where training data is available. This typically involves techniques that reduce distributional differences between the source and target domains or map features from both domains onto a common representation. In contrast, domain generalization [20–24] aims to build a model that can generalize to new, unseen domains without direct access to their data. Domain generalization methods utilize multiple source domains to create a robust model that would perform equally well in various unseen domains. Our plug-and-play method for multi-domain dataset condensation is related to previous domain-aware learning methods as it differentiates domains within the training dataset and considers possible domain shifts. To the best of our knowledge, the Domain-Aware Module (DAM) is the first work to incorporate domain-awareness into dataset condensation, bridging a previously unexplored gap between dataset condensation and multi-domain datasets.

3 Method

Given a dataset $\mathcal{D}_{real}=\{x_n, y_n\}_{n=1}^{N}$ where $y_n \in \{0,\dots,C-1\}$, single-domain dataset condensation aims to synthesize a much smaller synthetic dataset $\mathcal{D}_{syn}=\{\tilde{x}_m, \tilde{y}_m\}_{m=1}^{M}$ with $M \ll N$, such that $\mathcal{D}_{syn}$ has the same or similar power as $\mathcal{D}_{real}$ in terms of model training. Multi-Domain Dataset Condensation (MDDC) takes a step further and encodes domain variability within $\mathcal{D}_{syn}$, without explicit domain labels, while preserving the class-discriminative features.

Figure 2: Our framework incorporates both class-aware training (left), as in prior methods, and the proposed domain-aware training with DAM (right).

3.1 Domain-Aware Module

Each synthetic image $\tilde{x}_m \in \mathbb{R}^{H\times W\times 3}$ is paired with a learnable domain mask $z^{d,i}_m \in \mathbb{R}^{H\times W\times 3}$, $d \in \{0,\dots,D-1\}$, where $D$ is the number of pseudo domains and $i$ denotes the current iteration. We initialize all elements in the domain mask with 0.01 at $i=0$. To prevent a single domain from dominating the synthetic image, we leverage a per-pixel temperature-scaled softmax to generate the relative importance of each domain for each synthetic image, as well as to balance the domain importance among the $z^{d,i}_m$, as follows:

$$\alpha^{d,i}_m = \frac{\exp(z^{d,i}_m/\tau)}{\sum_{d'=0}^{D-1}\exp(z^{d',i}_m/\tau)}, \qquad (1)$$

where $\alpha^{d,i}_m \in \mathbb{R}^{H\times W\times 3}$ and $\tau$ is the temperature factor in the softmax function. Through $\alpha^{d,i}_m$, a synthetic image saliency map for domain $d$ at iteration $i$ is obtained as

$$\tilde{x}^{d,i}_m = \tilde{x}^{i}_m \odot \alpha^{d,i}_m, \qquad (2)$$

where $\odot$ is element-wise multiplication, and this satisfies the exact reconstruction identity

$$\tilde{x}^{i}_m = \sum_{d=0}^{D-1} \tilde{x}^{d,i}_m, \qquad (3)$$

since $\sum_{d=0}^{D-1}\alpha^{d,i}_m = 1$, as the $\alpha^{d,i}_m$ are outputs of a softmax. Through the domain-aware module, several domains can coexist in disjoint spatial regions without information loss. Note that $z^{d,i}_m$ is trained along with $\tilde{x}^{i}_m$ during training. As both $z^{d,i}_m$ and $\tilde{x}^{i}_m$ are trained, the $\tilde{x}^{d,i}_m$ are updated via Equation 2.
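To make the mask mechanics concrete, the following is a minimal PyTorch-style sketch of Eqs. 1–3. It is not the authors' code: tensor names such as syn_images and domain_logits are ours, and the surrounding condensation loop is omitted.

```python
import torch

def dam_decompose(syn_images, domain_logits, tau=0.1):
    """Split each synthetic image into D per-domain components (Eqs. 1-3).

    syn_images:    (M, H, W, 3) learnable synthetic images.
    domain_logits: (M, D, H, W, 3) learnable domain masks z^{d}_m.
    Returns a (M, D, H, W, 3) tensor of per-domain images.
    """
    # Per-pixel temperature-scaled softmax over the D masks (Eq. 1).
    alpha = torch.softmax(domain_logits / tau, dim=1)
    # Element-wise weighting yields the domain saliency images (Eq. 2).
    per_domain = syn_images.unsqueeze(1) * alpha
    # The alphas sum to 1 over d, so the decomposition is lossless (Eq. 3).
    return per_domain

# Masks initialized to 0.01 as in the paper; D = 4 pseudo domains.
M, D, H, W = 10, 4, 32, 32
syn = torch.randn(M, H, W, 3, requires_grad=True)
z = torch.full((M, D, H, W, 3), 0.01, requires_grad=True)
parts = dam_decompose(syn, z)
assert torch.allclose(parts.sum(dim=1), syn, atol=1e-5)  # Eq. 3 holds
```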
3.2 Frequency-Based Pseudo Domain Labeling

In many curated multi-domain benchmarks (e.g., PACS, Office-Home), explicit domain labels are available, as the domain differences are mostly distinguishable. However, for unconstrained web data or large mixed datasets, explicit domain labels are mostly unavailable, partly because these datasets were not built for classifying domains and partly because the distinction between domains is vague or overlapping. We first define a domain as variation not aligned with task-relevant information, in this case class-discriminative features, and leverage the Fast Fourier Transform (FFT) to extract domain-specific information for each image in $\mathcal{D}_{real}$, as theoretically supported and applied in prior domain adaptation and domain generalization work [8, 7]. For every real image $x_n$, the discrete 2-D Fourier transform is

$$\mathcal{F}(x_n)[u,v] = \sum_{h=0}^{H-1}\sum_{w=0}^{W-1} x_n[h,w]\, e^{-j2\pi(uh/H + vw/W)}, \qquad (4)$$

which is computed per color channel. Through shifting, the center of the amplitude spectrum becomes the low-frequency region, which prior domain adaptation and domain generalization methods leveraged for domain-specific information. Likewise, we crop the central region with a cropping ratio $\beta$ and take the mean of the amplitude, $\mu_n$:

$$\mu_n = \frac{1}{3\beta^2 HW}\sum \mathrm{Crop}_\beta\!\left\{|\mathcal{F}(x_n)|_{\text{shifted}}\right\}, \qquad \mathrm{Crop}_\beta\{\cdot\} \in \mathbb{R}^{\beta H \times \beta W \times 3}, \qquad (5)$$

where the sum runs over all spatial locations and channels of the cropped region. Sorting $\{\mu_n\}_{n=1}^{N}$ in ascending or descending order and slicing it into $D$ equal parts assigns pseudo-domain labels

$$d_n = \left\lfloor \frac{\mathrm{ranking}(\mu_n) - 1}{N/D} \right\rfloor, \qquad d_n \in \{0,\dots,D-1\}, \quad \mathrm{ranking}(\mu_n) \in \{1,\dots,N\}.$$
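The labeling procedure reduces to a few lines of NumPy; the sketch below is illustrative rather than the authors' implementation. Function and variable names are ours, beta = 0.25 is a placeholder value, and it assumes $N$ is divisible by the number of pseudo domains.

```python
import numpy as np

def pseudo_domain_labels(images, beta=0.25, num_domains=4):
    """Assign pseudo-domain labels from low-frequency FFT amplitude (Eqs. 4-5).

    images: (N, H, W, 3) array of real images.
    Returns integer labels d_n in {0, ..., num_domains - 1}.
    """
    n, h, w, _ = images.shape
    # Per-channel 2-D FFT; shift so low frequencies sit at the center (Eq. 4).
    amp = np.abs(np.fft.fftshift(np.fft.fft2(images, axes=(1, 2)), axes=(1, 2)))
    # Crop the central beta*H x beta*W low-frequency region.
    ch, cw = int(beta * h), int(beta * w)
    top, left = (h - ch) // 2, (w - cw) // 2
    # Mean low-frequency amplitude mu_n per image (Eq. 5).
    mu = amp[:, top:top + ch, left:left + cw, :].mean(axis=(1, 2, 3))
    # Sort by mu and slice the ranking into num_domains equal parts.
    ranking = np.empty(n, dtype=np.int64)
    ranking[np.argsort(mu)] = np.arange(1, n + 1)
    return (ranking - 1) // (n // num_domains)

labels = pseudo_domain_labels(np.random.rand(64, 32, 32, 3))
```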
3.3 Training Objective

Being a plug-and-play module, we leverage the same loss function $\mathcal{L}_{base}$ from the prior base methods, and the loss for the class becomes

$$\mathcal{L}_{cls} = \mathcal{L}_{base}(\Theta; \mathcal{D}_{real}, \mathcal{D}_{syn}), \qquad (6)$$

where $\Theta$ denotes the parameters needed for the loss computation. Accordingly, we define the domain loss as

$$\mathcal{L}_{dom} = \mathcal{L}_{base}(\Theta'; \mathcal{D}_{real}, \mathcal{D}^{dom}_{syn}). \qquad (7)$$

Here, the parameters for the domain loss are denoted as $\Theta'$. The set $\mathcal{D}^{dom}_{syn} = \{\tilde{x}^{d}_m, \tilde{y}_m\}_{1 \le m \le M,\, 0 \le d < D}$, where $\tilde{x}^{d}_m = \tilde{x}_m \odot \alpha^{d}_m$, is used solely to supervise the Domain-Aware Module (DAM) during the condensation phase. Notably, the domain masks $z^{d,i}_m$ used to compute $\alpha^{d}_m$ are discarded after condensation. As a result, only $\tilde{x}_m \in \mathcal{D}_{syn}$ is used during downstream training, ensuring that the number of synthetic images remains unchanged and the Images per Class is preserved. The architecture for the domain loss is identical to that for the class loss, but the parameters are initialized differently. Also, note that $\mathcal{D}_{real}$ is the same for the class and domain losses, while the batch configuration differs: for the class loss, the batch is grouped by the class label following the prior methods, while for the domain loss it is grouped by the pseudo-domain label. To sum up, the final loss becomes

$$\mathcal{L}_{total} = \mathcal{L}_{cls} + \lambda\mathcal{L}_{dom}, \qquad (8)$$

where $\lambda$ is the weighting factor. $\mathcal{L}_{cls}$ provides gradients for updating $\tilde{x}^{i}_m$, and $\mathcal{L}_{dom}$ for updating both $\tilde{x}^{i}_m$ and $z^{d,i}_m$. The parameters $\Theta$ and $\Theta'$ are randomly initialized or frozen after being trained on real data, depending on the prior method (details are given in the supplementary material A). The overall pipeline is illustrated in Figure 2; a schematic sketch of how the two losses combine is given below.
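In the sketch, base_loss stands in for whichever matching objective (DC, DM, or MTT) DAM is plugged into; every name here is illustrative rather than taken from a released implementation.

```python
def dam_total_loss(base_loss, theta, theta_prime,
                   class_batches, domain_batches,
                   syn_images, syn_domain_parts, lam=0.1):
    """Total condensation loss L_total = L_cls + lambda * L_dom (Eqs. 6-8).

    class_batches:    real batches grouped by class label.
    domain_batches:   real batches grouped by pseudo-domain label.
    syn_domain_parts: per-domain synthetic images from the DAM masks.
    """
    # Class loss, exactly as in the underlying method (Eq. 6).
    l_cls = base_loss(theta, class_batches, syn_images)
    # Domain loss over the per-domain synthetic images, computed with a
    # separate (or, for DM, reused) parameter set theta_prime (Eq. 7).
    l_dom = base_loss(theta_prime, domain_batches, syn_domain_parts)
    return l_cls + lam * l_dom  # Eq. 8
```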
4 Experiments

4.1 Dataset

We evaluate our plug-and-play method, DAM, on 32×32 CIFAR-10 and CIFAR-100 [5] and 64×64 Tiny ImageNet [25], the three most commonly used datasets in the field of dataset condensation. The experiment setting with these datasets is the single-domain setting. Additionally, we employ the 64×64 PACS [6], VLCS [26], and Office-Home [27] datasets that are commonly used in the fields of domain adaptation (DA) and domain generalization (DG). These multi-domain datasets have four distinct domains and are leveraged not only to validate the effectiveness of DAM in the multi-domain setting but also to better analyze the differences between single- and multi-domain dataset settings. We note that the provided domain labels are not leveraged unless explicitly stated in the experiment setting.

4.2 Implementation Details

We implement DAM on three pioneering prior methods, DC [2], DM [3], and MTT [4], in gradient matching, distribution matching, and trajectory matching dataset condensation, respectively. For a fair comparison, we follow the conventional experiment settings employing the ConvNet architecture [28] while varying the depth of the network depending on the image size of $\mathcal{D}_{real}$. More specifically, a three-depth ConvNet is utilized for all experiments with the CIFAR-10 and CIFAR-100 datasets, while all the other datasets leverage a four-depth ConvNet. All of the hyperparameters introduced in each prior method are set identically, and the learning rate for the DG datasets is set equal to the Tiny ImageNet setting. We note that $\mathcal{D}_{syn}$ is initialized with Gaussian noise in all of our experiments, rather than with real images from $\mathcal{D}_{real}$ as some prior works do. However, we demonstrate that the performance gap still persists even when initializing with real images in the supplementary material B. $z^{d,i}_m$ is initialized with 0.01, and the temperature $\tau$ is set to 0.1 when applying the softmax among the $D$ masks for all experiments. The domain embedding weight $\lambda$ is set to 0.1 for DC and DM and 0.01 for MTT. All of the hyperparameter sweep experiments (e.g., the $z_m$ initial value, the domain embedding weight $\lambda$, and the temperature $\tau$) can be found in the supplementary material C.

Table 1: Results with and without DAM on the prior methods in the single-domain setting. "T.Image." denotes the Tiny ImageNet dataset. All results are the average of 10 runs and reported as mean ± standard deviation.

Dataset       | CIFAR-10                       | CIFAR-100                      | T.Image.
Img/Cls       | 1        10       50           | 1        10       50           | 1
Ratio (%)     | 0.02     0.2      1            | 0.2      2        10           | 0.2
Random        | 12.5±0.8 25.1±1.4 42.5±0.5     | 3.7±0.2  13.9±0.3 29.0±0.3     | 1.3±0.1
DC            | 27.4±0.2 43.3±0.3 53.0±0.3     | 12.2±0.3 24.8±0.3 -            | -
DC + DAM      | 29.0±0.5 45.4±0.3 54.5±0.2     | 13.0±0.2 25.8±0.1 -            | -
DM            | 24.7±0.3 47.4±0.4 58.2±0.1     | 10.9±0.2 29.2±0.2 36.5±0.2     | 3.7±0.1
DM + DAM      | 27.1±0.3 49.8±0.5 59.5±0.2     | 11.8±0.2 30.0±0.1 37.3±0.2     | 4.2±0.1
MTT           | 41.9±0.4 50.7±0.8 -            | 15.8±0.3 35.3±0.2 -            | 4.8±0.3
MTT + DAM     | 46.8±0.4 57.9±0.4 -            | 24.0±0.3 35.9±0.2 -            | 5.7±0.2
Whole Dataset | 84.8±0.1                       | 56.2±0.3                       | 37.6±0.4

Table 2: Results with and without DAM on the prior methods in the multi-domain setting. All results are the average of 10 runs and reported as mean ± standard deviation.

Dataset       | PACS              | VLCS              | Office-Home
Img/Cls       | 1        10       | 1        10       | 1        10
Ratio (%)     | 0.08     0.8      | 0.07     0.7      | 0.46     4.6
Random        | 18.1±2.6 33.0±0.8 | 17.3±2.2 27.0±1.6 | 3.9±0.3  12.9±0.6
DC            | 35.3±0.6 46.1±0.7 | 29.6±0.9 39.0±0.6 | 11.0±0.3 -
DC + DAM      | 38.8±0.7 48.3±0.5 | 34.8±1.0 42.7±0.5 | 12.4±0.4 -
DM            | 28.7±0.5 46.7±0.5 | 29.1±1.7 42.0±0.3 | 9.0±0.3  25.5±0.3
DM + DAM      | 34.7±1.1 50.9±0.4 | 36.7±1.1 44.4±0.3 | 10.4±0.3 27.2±0.3
MTT           | 39.7±0.6 45.9±0.8 | 28.5±2.1 -        | 13.8±0.2 -
MTT + DAM     | 46.6±0.9 50.6±0.6 | 39.7±1.8 -        | 16.3±0.2 -
Whole Dataset | 72.0±0.8          | 60.8±0.6          | 50.4±0.8
$D$ is set to 4 for all experiments. Finally, we follow DM [3] for the evaluation protocol in all experiments, and the results presented in the tables are the average of 10 evaluation results.

4.3 Results

Main results. Table 1 demonstrates the performance in the single-domain setting on three commonly utilized benchmarks in dataset condensation, varying the Images per Class (IPC) and the prior dataset condensation method employed along with DAM. Similarly, in Table 2, we showcase the performance on three commonly used benchmarks in DA and DG for the multi-domain setting. Note that we use all of the domains in the dataset for training and evaluating the DG datasets. A "-" in the tables denotes an experiment setting that either 1) was not reported in the original paper or 2) requires computational resources beyond our limit. All reported experiments show performance improvements when leveraging DAM with the prior methods. This consistent improvement confirms that DAM effectively enriches condensed data with domain-specific structure while preserving class-discriminative information, the core objective in classification tasks. Despite the risk that embedding additional domain cues might interfere with class semantics, the observed gains demonstrate that DAM successfully integrates domain context in a way that reinforces, rather than disrupts, the underlying class structure as intended. We visualize the final condensed synthetic data for the CIFAR-10 and PACS datasets under the 10 IPC setting with DC+DAM in Figure 3. Note that $\mathcal{D}^{dom}_{syn}$ is used only during the dataset condensation process to generate $\mathcal{D}_{syn}$, and that all the results with DAM in Tables 1, 2, and 3 are obtained when the model is trained with only the dataset $\mathcal{D}_{syn}$ of $M$ datapoints, i.e., without using $\mathcal{D}^{dom}_{syn}$ at all.

Figure 3: Visualization of the final output in CIFAR-10 and PACS under the 10 IPC setting. The shown images are condensed with DC+DAM. More outputs can be found in the supplementary material.

Table 3: Cross-architecture results with condensed CIFAR-10 data under 10 IPC with ConvNet on various architectures. All results are the average of 10 runs and reported as mean ± standard deviation.

Method    | ConvNet  | AlexNet  | VGG      | ResNet18 | ViT-Tiny | ViT-Small
DC        | 43.3±0.3 | 15.0±3.4 | 34.6±0.2 | 18.7±0.5 | 21.7±0.6 | 21.7±0.5
DC + DAM  | 45.4±0.3 | 22.8±1.2 | 35.9±0.4 | 19.5±0.6 | 22.4±0.4 | 22.4±0.4
DM        | 47.4±0.4 | 36.1±0.4 | 39.9±0.3 | 36.9±0.8 | 26.6±0.5 | 27.1±0.5
DM + DAM  | 49.8±0.5 | 39.0±0.3 | 40.9±0.7 | 39.8±1.0 | 26.9±0.5 | 27.4±0.4
MTT       | 50.7±0.8 | 23.2±1.3 | 45.7±0.8 | 38.9±0.8 | 20.3±1.4 | 22.5±0.7
MTT + DAM | 57.9±0.4 | 24.0±1.0 | 46.6±0.9 | 41.1±0.7 | 20.5±0.8 | 22.8±1.0

Cross-architecture generalization. We assess the generalization capabilities of condensed synthetic data across different architectures. Following MTT [4], we experiment on ConvNet, AlexNet, VGG11, and ResNet-18. Furthermore, we experiment on ViT-Tiny and ViT-Small, which prior methods did not evaluate. The cross-architecture experiments are conducted with condensed CIFAR-10 data under 10 IPC with ConvNet. As shown in Table 3, incorporating DAM yields superior generalization performance across methods and architectures compared to the corresponding methods without DAM, demonstrating robustness across architectures.

5 Discussion

Single-domain and multi-domain datasets. The need for Multi-Domain Dataset Condensation (MDDC) methods has been highlighted in Section 1 (Figure 1). We further extend the experiment in the same setting and show the results in Table 4.
For the single-domain dataset setting, we isolate the target domain with an explicit domain label for condensing and evaluating. On the other hand, for the multi-domain dataset setting, the whole PACS dataset (i.e., all four domains) is utilized for training. The evaluation is done on the same target domains for both single- and multi-domain dataset settings. In most of the results, the single-domain dataset setting performed much better than the multi-domain dataset setting, demonstrating the need for multi-domain dataset consideration in DC. Notably, each prior pioneering method with DAM always performed better than without DAM, and in cases such as DC and DM with Art-Painting as the target domain, the performance was on par with the single-domain dataset setting. Most importantly, the performance gap between the single-domain setting and the multi-domain setting substantially declined with DAM.

Table 4: Experiment results of single- and multi-domain dataset settings. In the single-domain dataset setting, the target data is used during the condensation process, whereas in the multi-domain dataset setting, the whole PACS dataset is utilized. The evaluation is done on the target domain images for both settings. The value inside the parentheses denotes the difference between Multi-Domain with DAM and without DAM.

Method (↓) / Target Domain (→) | Photo       | Art-Painting | Cartoon     | Sketch
Single-Domain  DC              | 50.6        | 29.6         | 53.7        | 43.8
Multi-Domain   DC              | 48.1        | 27.6         | 38.0        | 32.1
Multi-Domain   DC + DAM        | 49.4 (+1.3) | 30.4 (+2.8)  | 40.4 (+2.4) | 37.4 (+5.3)
Single-Domain  DM              | 50.7        | 29.5         | 50.2        | 35.4
Multi-Domain   DM              | 46.8        | 21.3         | 35.4        | 22.6
Multi-Domain   DM + DAM        | 47.4 (+0.6) | 29.3 (+8.0)  | 37.1 (+1.7) | 30.9 (+8.3)
Single-Domain  MTT             | 55.2        | 31.9         | 55.1        | 42.3
Multi-Domain   MTT             | 50.7        | 24.5         | 40.8        | 44.6
Multi-Domain   MTT + DAM       | 52.1 (+1.4) | 26.9 (+2.4)  | 50.5 (+9.7) | 50.9 (+6.3)

Table 5: Leave-one-domain-out evaluation on the PACS, VLCS, and Office-Home datasets with 1 IPC using DM and DM+DAM. Target domains are abbreviated as: PACS: (P)hoto, (A)rt-Painting, (C)artoon, (S)ketch; VLCS: Pascal (V)OC, (L)abelMe, (C)altech, (S)un; Office-Home: (A)rt, (C)lipart, (P)roduct, (R)eal-World.

Dataset | PACS                | VLCS                | Office-Home
Target  | P    A    C    S    | V    L    C    S    | A    C    P    R
DM      | 29.9 18.9 20.6 22.5 | 24.6 33.9 21.8 33.0 | 3.3  6.3  7.0  5.8
DM+DAM  | 44.4 24.4 27.7 30.7 | 26.9 40.0 26.0 36.3 | 5.2  7.4  9.7  7.1

Table 6: Comparison of different pseudo-domain labeling strategies on the CIFAR-10, PACS, and VLCS datasets under 1 and 10 IPC. All results are the average of 10 runs and reported as mean ± standard deviation. FFT: frequency feature extraction; log-Var: log-variance of early features; Mean-Sort: ordering features by mean value; K-Means: clustering features with K-Means. Baselines include random pseudo-labels and actual domain labels.

                      | CIFAR-10          | PACS              | VLCS
Labeling Strategy     | 1 IPC    10 IPC   | 1 IPC    10 IPC   | 1 IPC    10 IPC
FFT + Mean-Sort       | 27.1±0.3 49.8±0.5 | 34.7±1.1 50.9±0.4 | 36.7±1.1 44.4±0.3
FFT + K-Means         | 26.6±0.4 49.7±0.3 | 32.3±0.6 49.7±0.7 | 36.8±1.1 44.3±0.4
log-Var + Mean-Sort   | 27.0±0.3 49.8±0.2 | 33.6±0.8 49.7±0.4 | 34.0±0.8 44.1±0.4
log-Var + K-Means     | 26.5±0.4 49.4±0.3 | 33.5±0.6 48.8±0.6 | 35.3±1.1 44.0±0.4
Random Pseudo Labels  | 25.3±0.5 48.1±0.7 | 31.7±1.4 48.0±1.2 | 31.1±1.8 42.4±0.7
Actual Domain Labels  | -        -        | 34.0±1.7 50.6±0.6 | 34.5±1.5 43.9±0.5

Leave-one-domain-out evaluation. In this section, we evaluate whether embedding domain information into each synthetic image improves generalization to unseen domains beyond the training set.
For this experiment, we tested on the three domain generalization benchmarks and compared DM with DM+DAM under 1 IPC, using explicit domain labels only to isolate the target domain, which is used solely during evaluation and ignored during the condensation process. As can be seen from Table 5, employing DAM with DM performs better by a substantial gap. This validates that employing DAM substantially increases the generalization ability of the condensed data by embedding informative and non-overlapping domain information. These results showcase the possibility of using DAM even for domain adaptation and domain generalization, where the burden of gathering data is much more costly.

Various pseudo-domain labeling. To evaluate the effectiveness of our pseudo-domain labeling strategy, we further experimented with log-variance (log-Var) for extracting domain-specific features and with K-Means clustering for clustering the extracted features to assign pseudo-domain labels. The feature for log-variance is extracted from the first and second layers of the three-depth ConvNet. Additionally, we compare the results with random pseudo-domain labeling and with actual domain labels for the available datasets, PACS and VLCS. The random pseudo-domain labeling is done by assigning a pseudo-domain label to each synthetic image, not pixel-wise as done in DAM. The experiments are conducted using DM+DAM across three datasets under 1 and 10 IPC, and the results are shown in Table 6. Across all datasets and IPC configurations, FFT-based feature extraction consistently outperforms log-variance, regardless of the clustering strategy applied. Notably, the combination of FFT and Mean-Sort achieves the highest performance and even surpasses the use of actual domain labels. In contrast, random pseudo-domain labeling yields the lowest performance among all variants, though it still performs better than DM without DAM, highlighting the value of incorporating even weak domain information.

Figure 4: Experiment with a varying number of domains D on the CIFAR-10 and PACS datasets under 1 and 10 IPC with DM and DM+DAM.

Effect of the Number of Pseudo Domains. We analyze the impact of the number of pseudo domains $D$ on the performance of DM+DAM across CIFAR-10 and PACS under both 1 and 10 IPC. As shown in Figure 4, the dashed red line indicates the performance of the baseline DM method, while the solid blue curve shows the performance of DM+DAM as $D$ varies. In all settings, DM+DAM consistently outperforms DM, demonstrating the effectiveness of incorporating domain-specific information during condensation. Notably, for PACS, which has four explicitly annotated domains, the best performance is observed at $D=4$ in both IPC settings, suggesting that aligning the number of pseudo domains with the true number of domains is especially beneficial. On the other hand, the optimal number of pseudo domains for CIFAR-10 varies across settings, indicating that the best partitioning may depend on the nature of the dataset and the number of images per class.
However, we emphasize that even with an arbitrary choice of $D$, we achieve better performance than the baseline methods (i.e., without DAM).

6 Conclusion

In this work, we introduce Multi-Domain Dataset Condensation (MDDC), the first framework to explicitly tackle dataset condensation under multi-domain settings. To address this setting, we propose the Domain-Aware Module (DAM), a plug-and-play component that embeds domain-specific information into synthetic data. Unlike prior methods that focus solely on class preservation, DAM leverages domain masking and FFT-based pseudo-domain labeling to preserve domain diversity, improving both in-domain performance and out-of-domain generalization. Extensive experiments across various IPC settings, datasets, and architectures confirm the effectiveness of our approach. While DAM introduces additional components, such as domain mask optimization and pseudo-label assignment during training, it establishes a strong foundation for future research in multi-domain dataset condensation.

Limitation. DAM introduces additional parameters and a domain-specific loss, which increase condensation time and memory usage. On the PACS dataset under 1 IPC, our method roughly doubles the condensation time due to the added domain classifier, while memory usage increases by only 0.1 GB. Since condensation is a one-time process and its purpose is to reduce downstream training time, this overhead is typically negligible in practice. Detailed measurements are provided in the supplementary material E.

References

[1] T. Wang, J.-Y. Zhu, A. Torralba, and A. A. Efros, "Dataset distillation," arXiv preprint arXiv:1811.10959, 2018.
[2] B. Zhao, K. R. Mopuri, and H. Bilen, "Dataset condensation with gradient matching," in International Conference on Learning Representations, 2021.
[3] B. Zhao and H. Bilen, "Dataset condensation with distribution matching," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 6514–6523.
[4] G. Cazenavette, T. Wang, A. Torralba, A. A. Efros, and J.-Y. Zhu, "Dataset distillation by matching training trajectories," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4750–4759.
[5] A. Krizhevsky, G. Hinton et al., "Learning multiple layers of features from tiny images," 2009.
[6] D. Li, Y. Yang, Y.-Z. Song, and T. M. Hospedales, "Deeper, broader and artier domain generalization," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 5542–5550.
[7] Y. Yang and S. Soatto, "FDA: Fourier domain adaptation for semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 4085–4095.
[8] Q. Xu, R. Zhang, Y. Zhang, Y. Wang, and Q. Tian, "A Fourier-based framework for domain generalization," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 14383–14392.
[9] J.-H. Kim, J. Kim, S. J. Oh, S. Yun, H. Song, J. Jeong, J.-W. Ha, and H. O. Song, "Dataset condensation via efficient synthetic-data parameterization," in International Conference on Machine Learning. PMLR, 2022, pp. 11102–11118.
[10] G. Zhao, G. Li, Y. Qin, and Y. Yu, "Improved distribution matching for dataset condensation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 7856–7865.
[11] Z. Guo, K. Wang, G. Cazenavette, H. Li, K. Zhang, and Y. You, "Towards lossless dataset distillation via difficulty-aligned trajectory matching," in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=rTBL8OhdhH
[12] G. Cazenavette, T. Wang, A. Torralba, A. A. Efros, and J.-Y. Zhu, "Generalizing dataset distillation via deep generative prior," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 3739–3748.
[13] D. Su, J. Hou, W. Gao, Y. Tian, and B. Tang, "D^4: Dataset distillation via disentangled diffusion model," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 5809–5818.
[14] L. Zhang, J. Zhang, B. Lei, S. Mukherjee, X. Pan, B. Zhao, C. Ding, Y. Li, and D. Xu, "Accelerating dataset distillation via model augmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 11950–11959.
[15] J. Du, Y. Jiang, V. Y. Tan, J. T. Zhou, and H. Li, "Minimizing the accumulated trajectory error to improve dataset distillation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 3749–3758.
[16] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. March, and V. Lempitsky, "Domain-adversarial training of neural networks," Journal of Machine Learning Research, vol. 17, no. 59, pp. 1–35, 2016.
[17] M. Long, Z. Cao, J. Wang, and M. I. Jordan, "Conditional adversarial domain adaptation," Advances in Neural Information Processing Systems, vol. 31, 2018.
[18] Y. Zhang, T. Liu, M. Long, and M. Jordan, "Bridging theory and algorithm for domain adaptation," in International Conference on Machine Learning. PMLR, 2019, pp. 7404–7413.
[19] J. Zhu, H. Bai, and L. Wang, "Patch-mix transformer for unsupervised domain adaptation: A game perspective," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 3561–3571.
[20] Y. Li, M. Gong, X. Tian, T. Liu, and D. Tao, "Domain generalization via conditional invariant representations," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
[21] K. Zhou, Y. Yang, Y. Qiao, and T. Xiang, "Domain generalization with MixStyle," arXiv preprint arXiv:2104.02008, 2021.
[22] J. Cha, S. Chun, K. Lee, H.-C. Cho, S. Park, Y. Lee, and S. Park, "SWAD: Domain generalization by seeking flat minima," Advances in Neural Information Processing Systems, vol. 34, pp. 22405–22418, 2021.
[23] X. Yao, Y. Bai, X. Zhang, Y. Zhang, Q. Sun, R. Chen, R. Li, and B. Yu, "PCL: Proxy-based contrastive learning for domain generalization," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7097–7107.
[24] J. Choi, H. S. Seong, S. Park, and J.-P. Heo, "TCX: Texture and channel swappings for domain generalization," Pattern Recognition Letters, vol. 175, pp. 74–80, 2023.
[25] Y. Le and X. Yang, "Tiny ImageNet visual recognition challenge," CS 231N, vol. 7, no. 7, p. 3, 2015.
[26] C. Fang, Y. Xu, and D. N. Rockmore, "Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias," in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1657–1664.
[27] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan, "Deep hashing network for unsupervised domain adaptation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5018–5027.
[28] S. Gidaris and N. Komodakis, "Dynamic few-shot visual learning without forgetting," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4367–4375.
[29] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang, "Moment matching for multi-source domain adaptation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1406–1415.

A Details regarding Θ and Θ′

We clarify the roles of two sets of model parameters in our framework: Θ and Θ′. Both parameter sets correspond to models with the same architecture but serve different purposes and operate on different types of data batches. The need for Θ′ arises from the fact that the domain-aware loss must be computed over batches grouped by pseudo-domain labels (e.g., derived via FFT-based clustering), which differs from the class-wise batches typically used in condensation methods. Moreover, in methods such as DC and MTT, where Θ is either actively updated or pretrained for a specific matching loss, reusing the same parameter set for domain-aware supervision is unsuitable. Thus, Θ′ is introduced to decouple domain-specific learning from class-based learning during the condensation process.

• DC [2]: Θ is randomly initialized and updated through bi-level optimization using class-wise batches. Because Θ is trained throughout the condensation process, a separate parameter set Θ′ is introduced and trained on domain-grouped batches to compute the domain-aware loss independently.
• DM [3]: Θ is randomly initialized but remains fixed throughout the condensation process. Since the parameters are not updated, the same Θ can be reused to compute the domain-aware loss, and an explicit Θ′ is not required, even though domain-grouped batches are still used for the loss computation.
• MTT [4]: Θ is pretrained on real data and used to guide condensation via stored training trajectories. To preserve this role, a separate parameter set Θ′ is trained independently using pseudo-domain labels on real data prior to condensation, in a manner similar to the pretraining of Θ.

Across all methods incorporating domain-aware learning, batches used with Θ′ are consistently organized by pseudo-domain labels. Whether a distinct Θ′ is needed depends on whether Θ is trained before or during the condensation process.

B Initializing synthetic data with real images

In the main manuscript, all experiments initialize synthetic data using Gaussian noise, which better aligns with the privacy-preserving goals of dataset condensation. However, to demonstrate that our proposed method works even under alternative initializations, we conduct additional experiments where synthetic data is initialized with real images, selecting a random image from the corresponding class in the real dataset, following prior works.

Table A: Performance comparison on the CIFAR-10 and PACS datasets under 1 and 10 Images per Class (IPC). The experiments initialize the synthetic data with real images, following prior methods.

Dataset   | CIFAR-10          | PACS
Img/Cls   | 1        10       | 1        10
Ratio (%) | 0.02     0.2      | 0.08     0.8
DC        | 28.2±0.6 44.7±0.5 | 35.9±1.1 47.7±1.1
DC + DAM  | 29.0±0.4 45.2±0.3 | 37.8±0.7 48.7±0.2
DM        | 25.7±0.6 49.1±0.2 | 32.0±1.9 50.0±0.9
DM + DAM  | 26.8±0.3 50.0±0.3 | 32.7±1.2 50.9±0.6
MTT       | 45.4±0.2 65.3±0.4 | 44.3±1.8 51.4±1.2
MTT + DAM | 45.6±0.3 65.5±0.2 | 44.4±1.6 53.5±1.3
These results, shown in Table A, represent averages over 10 runs, consistent with our main evaluation protocol. While all methods benefit from real initialization, as expected due to the additional structure provided at the start, the performance gains from DAM persist, underscoring its robustness. Notably, the relative improvement from DAM remains more pronounced in multi-domain settings like PACS, where domain shift presents a bigger challenge. In contrast, single-domain datasets such as CIFAR-10 exhibit smaller domain-induced variability, which partially reduces the benefits of DAM when real images are used as initialization.

C Hyperparameter sweep

C.1 Domain mask initialization $z_m$

The domain mask initialization value ($z_m$) controls the initial scale of the softmax-normalized spatial masks applied in the Domain-Aware Module (DAM). A smaller $z_m$ leads to nearly uniform domain weights at the beginning of training, allowing all domain masks to contribute equally. In contrast, a larger $z_m$ produces more confident, peaked softmax outputs early on, encouraging the model to assign higher importance to specific domains from the start. To understand the effect of this initialization, we conduct a sweep across a range of $z_m$ values and showcase the results in Table B. We find that the setting used in the main manuscript ($z_m = 0.01$) consistently results in first- or second-best performance across all methods and datasets. Crucially, even under different initializations, methods with DAM uniformly outperform their respective baselines without DAM, indicating strong robustness. While MTT + DAM shows slightly more variation across $z_m$ values compared to other methods, it still maintains a clear performance margin over MTT without DAM. Due to observed instability at 1 IPC, we omit MTT + DAM results for IPC 10 in this ablation. Overall, these results confirm that $z_m = 0.01$ is a reliable and effective choice, and that DAM consistently enhances performance across settings.

Table B: Effect of varying the domain mask initialization on the CIFAR-10 and PACS datasets under IPC 1 and IPC 10 (data ratios 0.02%, 0.2%, 0.08%, and 0.8%, respectively). All results are the average of 10 runs and reported as mean ± standard deviation. The 0.01 rows correspond to the setting used in the main manuscript.

Method     Initial Value | CIFAR-10 1 IPC | CIFAR-10 10 IPC | PACS 1 IPC | PACS 10 IPC
DC + DAM   0.1           | 28.5±0.3       | 45.3±0.4        | 38.5±0.9   | 48.8±0.7
DC + DAM   0.05          | 28.8±0.3       | 45.2±0.4        | 38.8±0.9   | 47.9±0.7
DC + DAM   0.01          | 29.0±0.5       | 45.4±0.3        | 38.8±0.7   | 48.3±0.5
DC + DAM   0.005         | 28.7±0.5       | 45.2±0.2        | 38.6±0.4   | 49.0±0.7
DC + DAM   0.001         | 28.7±0.6       | 45.1±0.4        | 39.4±0.6   | 47.9±0.6
DM + DAM   0.1           | 27.2±0.3       | 49.7±0.3        | 34.2±1.6   | 50.1±0.5
DM + DAM   0.05          | 26.6±0.3       | 49.9±0.4        | 34.2±0.5   | 49.7±0.6
DM + DAM   0.01          | 27.1±0.3       | 49.8±0.5        | 34.7±0.8   | 50.9±0.4
DM + DAM   0.005         | 26.3±0.3       | 48.9±0.2        | 34.2±1.0   | 49.2±0.7
DM + DAM   0.001         | 26.2±0.4       | 48.3±0.2        | 33.8±1.2   | 50.8±0.4
MTT + DAM  0.1           | 46.4±0.4       | -               | 43.0±1.2   | -
MTT + DAM  0.05          | 41.1±0.6       | -               | 43.1±1.0   | -
MTT + DAM  0.01          | 46.8±0.4       | 57.9±0.4        | 46.6±1.3   | 50.6±0.6
MTT + DAM  0.005         | 41.2±0.8       | -               | 43.3±1.2   | -
MTT + DAM  0.001         | 41.9±0.5       | -               | 40.7±1.2   | -

C.2 Domain embedding weight λ
We study the effect of varying the domain embedding weight λ, which balances the class loss and the domain-aware loss in DAM. A smaller λ reduces the influence of domain-specific learning, while a larger value encourages the model to attend more strongly to domain variations during condensation. As shown in Table C, performance remains strong across a wide range of λ values, showing that the method is not overly sensitive to this hyperparameter. The setting used in the main manuscript consistently achieves the best or second-best performance across datasets and methods. This confirms that λ = 0.1 is a reliable default, and that DAM provides robust improvements without requiring precise tuning. We do not explore values of λ greater than 0.1, as assigning excessive weight to domain supervision risks overshadowing class-discriminative learning. As emphasized in the main manuscript, DAM is designed to enrich class information with domain cues, not to compete with it. As with the previous sweep, we omit MTT + DAM results for IPC 10 due to instability observed under the 1 IPC setting.

Table C: Effect of varying the embedding weight on the CIFAR-10 and PACS datasets under IPC 1 and IPC 10 (data ratios 0.02%, 0.2%, 0.08%, and 0.8%, respectively). All results are the average of 10 runs and reported as mean ± standard deviation. The main-manuscript settings are λ = 0.1 for DC and DM and λ = 0.01 for MTT.

Method     λ     | CIFAR-10 1 IPC | CIFAR-10 10 IPC | PACS 1 IPC | PACS 10 IPC
DC + DAM   0.1   | 29.0±0.5       | 45.4±0.3        | 38.8±0.7   | 48.3±0.5
DC + DAM   0.05  | 29.0±0.2       | 44.9±0.3        | 38.9±0.7   | 47.7±0.3
DC + DAM   0.01  | 28.9±0.6       | 45.6±0.2        | 38.8±0.4   | 48.0±0.5
DC + DAM   0.005 | 28.8±0.4       | 45.0±0.2        | 38.3±0.5   | 47.7±0.5
DC + DAM   0.001 | 28.9±0.5       | 45.7±0.3        | 38.6±0.4   | 48.0±0.4
DM + DAM   0.1   | 27.1±0.3       | 49.8±0.5        | 34.7±0.8   | 50.9±0.4
DM + DAM   0.05  | 27.2±0.5       | 48.9±0.4        | 34.3±1.3   | 49.3±0.4
DM + DAM   0.01  | 26.5±0.2       | 49.0±0.4        | 34.3±0.6   | 50.4±0.7
DM + DAM   0.005 | 26.7±0.6       | 48.9±0.3        | 34.3±0.7   | 50.0±0.6
DM + DAM   0.001 | 26.8±0.4       | 49.5±0.4        | 33.3±0.7   | 50.5±0.8
MTT + DAM  0.1   | 46.1±1.3       | -               | 43.3±0.8   | -
MTT + DAM  0.05  | 46.5±0.8       | -               | 43.9±0.5   | -
MTT + DAM  0.01  | 46.8±0.4       | 57.9±0.4        | 46.6±0.9   | 50.6±0.6
MTT + DAM  0.005 | 46.0±0.5       | -               | 46.0±0.7   | -
MTT + DAM  0.001 | 46.7±0.8       | -               | 46.5±1.0   | -

C.3 Temperature τ

We ablate the softmax temperature τ in DAM, which controls the sharpness of domain assignment.
A lower τ (e.g., 0.1) enforces peaked domain masks, while higher values (e.g., 1 or 5) blend domain cues more evenly. As demonstrated in Table D, experiments with τ = 1 and τ = 5 show similar results, whereas the more discriminative setting τ = 0.1 yields a clear improvement in most configurations. As with the previous sweeps, we omit MTT + DAM results for IPC 10 due to instability observed under the 1 IPC setting.

Table D: Effect of varying the temperature τ on the CIFAR-10 and PACS datasets under IPC 1 and IPC 10 (data ratios 0.02%, 0.2%, 0.08%, and 0.8%, respectively). All results are the average of 10 runs and reported as mean ± standard deviation. The τ = 0.1 rows correspond to the setting used in the main manuscript.

Method     τ    | CIFAR-10 1 IPC | CIFAR-10 10 IPC | PACS 1 IPC | PACS 10 IPC
DC + DAM   0.1  | 29.0±0.5       | 45.4±0.3        | 38.8±0.7   | 48.3±0.5
DC + DAM   1    | 28.5±0.4       | 45.3±0.3        | 38.8±0.7   | 48.7±0.5
DC + DAM   5    | 28.7±0.4       | 45.1±0.3        | 38.8±0.5   | 48.7±0.5
DM + DAM   0.1  | 27.1±0.3       | 49.8±0.5        | 34.7±1.1   | 50.9±0.4
DM + DAM   1    | 26.9±0.5       | 49.1±0.3        | 33.1±1.0   | 49.8±0.5
DM + DAM   5    | 26.8±0.4       | 49.1±0.4        | 33.3±1.0   | 50.4±0.5
MTT + DAM  0.1  | 46.8±0.4       | 57.9±0.4        | 46.6±0.9   | 50.6±0.6
MTT + DAM  1    | 42.4±0.6       | -               | 46.9±0.4   | -
MTT + DAM  5    | 42.0±0.7       | -               | 48.5±0.8   | -

D Additional dataset for the multi-domain setting

To evaluate the scalability of our approach on a larger multi-domain dataset, we conduct experiments on DomainNet [29], a benchmark dataset comprising 345 classes across six distinct domains: Clipart, Infograph, Painting, Quickdraw, Real, and Sketch. The total dataset contains approximately 586,575 images, with the number of samples per domain ranging from 48,837 to 175,327, making it one of the largest and most diverse domain generalization datasets. Due to the high computational demand of such a large-scale dataset, we perform the evaluation under the 1 Image Per Class (IPC) setting, which corresponds to a 0.06% data ratio. The results are reported in Table E. While the overall performance is lower, owing to the dataset's complexity and extreme data compression, the incorporation of DAM still provides a measurable improvement over the baseline DM method, further demonstrating the robustness and scalability of our proposed approach.

Table E: Comparison on DomainNet [29] under IPC 1 (0.06% data ratio). All results are the average of 3 runs and reported as mean ± standard deviation.

DM       | 3.44±0.03
DM + DAM | 3.52±0.03

E Computational cost

We report the GPU memory usage and per-iteration training time with and without our method, DAM. All experiments were conducted using an NVIDIA RTX A6000 GPU and an Intel Xeon Gold 6442Y CPU. The reported training time is the average duration of a single training loop measured over 10 iterations, taken after 10 warm-up iterations. Peak GPU memory consumption is measured during the same window using PyTorch's memory profiling utilities (torch.cuda.max_memory_allocated()). As shown in Table F and Table G, and as also noted in the limitation, incorporating DAM introduces an overhead. Because we introduce domain masks per image, GPU memory usage increases with the images per class (IPC) and the number of dataset classes. However, the training time doubles only at low IPC and does not grow linearly with the IPC and the number of dataset classes. For MTT, we observed a slightly different behavior: GPU memory usage and training time were unstable across repeated runs, with noticeable fluctuations. We attribute this instability to the overhead of loading and processing trajectory data within each training loop, which is unique to the MTT framework. Due to this inconsistency, we report the highest observed GPU memory usage and training time across three repeated runs for each setting.

Table F: Computational cost with and without DAM on the prior methods in the single-domain setting. "T.Image." denotes the Tiny ImageNet dataset. Results are shown as peak GPU consumption - average training loop time.

Dataset   | CIFAR-10                             | CIFAR-100                          | T.Image.
Img/Cls   | 1           10          50           | 1          10          50          | 1
DC        | 1GiB-0.2s   1GiB-11.1s  2GiB-60.6s   | 2GiB-1.8s  5GiB-105s   -           | -
DC + DAM  | 1GiB-0.4s   2GiB-13.4s  9GiB-90.9s   | 2GiB-2.4s  18GiB-119s  -           | -
DM        | 0.1GiB-0.1s 1GiB-0.1s   1GiB-0.1s    | 1GiB-0.7s  2GiB-0.8s   8GiB-0.8s   | 6GiB-3.5s
DM + DAM  | 0.1GiB-0.1s 1GiB-0.2s   4GiB-0.3s    | 1GiB-1.2s  7GiB-1.4s   36GiB-2.5s  | 10GiB-5.4s
MTT       | 1GiB-2.2s   5GiB-1.3s   -            | 5GiB-2.4s  -           -           | -
MTT + DAM | 1GiB-3.3s   9GiB-4.7s   -            | 9GiB-4.5s  -           -           | -

Table G: Computational cost with and without DAM on the prior methods in the multi-domain setting. Results are shown as peak GPU consumption - average training loop time.

Dataset   | PACS                   | VLCS                   | Office-Home
Img/Cls   | 1          10          | 1          10          | 1           10
DC        | 3GiB-0.4s  4GiB-26.6s  | 3GiB-0.3s  3GiB-18.9s  | 4GiB-2.9s   -
DC + DAM  | 3GiB-0.7s  5GiB-31.4s  | 3GiB-0.5s  4GiB-22.9s  | 5GiB-3.5s   -
DM        | 1GiB-0.1s  1GiB-0.2s   | 1GiB-0.1s  1GiB-0.1s   | 2GiB-1s     5GiB-1.1s
DM + DAM  | 1GiB-0.2s  2GiB-0.2s   | 1GiB-0.2s  2GiB-0.3s   | 2GiB-1.2s   10GiB-1.9s
MTT       | 1GiB-2.5s  13GiB-2.3s  | 1GiB-2.8s  -           | 12GiB-3.0s  -
MTT + DAM | 3GiB-4.0s  25GiB-5.1s  | 2GiB-4.1s  -           | 24GiB-9.5s  -
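The measurement protocol above is easy to reproduce; a minimal sketch follows, where train_step is a placeholder for one condensation iteration and the 10 warm-up and 10 measured iterations mirror the description in this section.

```python
import time
import torch

def profile_loop(train_step, warmup=10, iters=10):
    """Average per-iteration time and peak GPU memory, as described above."""
    for _ in range(warmup):                 # warm-up iterations, not measured
        train_step()
    torch.cuda.synchronize()
    torch.cuda.reset_peak_memory_stats()    # start a fresh peak-memory window
    start = time.time()
    for _ in range(iters):
        train_step()
    torch.cuda.synchronize()
    avg_time_s = (time.time() - start) / iters
    peak_gib = torch.cuda.max_memory_allocated() / 2**30
    return avg_time_s, peak_gib
```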
F Hyperparameters for MTT

For DC and DM, we adopted the hyperparameters used in their respective original implementations. For multi-domain datasets, we followed the same configuration as used for Tiny ImageNet. In contrast, MTT required a separate hyperparameter search due to frequent occurrences of NaN losses during training when combined with DAM and Gaussian noise initialization. Table H lists the hyperparameters used for MTT with DAM across all datasets. We initially started with the settings reported in the original MTT paper [4] and conducted minimal adjustments only when instability (e.g., NaN gradients or diverging loss) was observed. We constrained the search to a narrow range around the original values, preferring stability over aggressive tuning. It is important to note that these are not hyperparameters introduced by our method (DAM) but rather ones inherited from the MTT pipeline.

Table H: Hyperparameters for DAM with MTT. "T.Image." denotes the Tiny ImageNet dataset and "OH" denotes the Office-Home dataset.

Dataset    IPC | Synthetic Steps | Expert Epochs | Max Start Epochs | Learning Rate (Image) | Learning Rate | Starting Synthetic Step Size
CIFAR-10   1   | 50              | 2             | 2                | 100                   | 10^-7         | 10^-2
CIFAR-10   10  | 30              | 2             | 20               | 10^5                  | 10^-6         | 10^-2
CIFAR-100  1   | 20              | 3             | 20               | 10^3                  | 10^-5         | 10^-2
CIFAR-100  10  | 20              | 2             | 20               | 10^3                  | 10^-5         | 10^-2
T.Image.   1   | 10              | 2             | 10               | 10^4                  | 10^-4         | 10^-2
PACS       1   | 10              | 2             | 10               | 10^4                  | 10^-5         | 10^-2
PACS       10  | 20              | 2             | 40               | 10^4                  | 10^-6         | 10^-2
VLCS       1   | 10              | 2             | 10               | 10^4                  | 10^-6         | 10^-2
VLCS       10  | 20              | 2             | 40               | 10^4                  | 10^-6         | 10^-2
OH         1   | 10              | 2             | 10               | 10^4                  | 10^-4         | 10^-2

G Qualitative results

We provide additional qualitative examples of the condensed synthetic images generated with DAM in Figures 5, 6, 7, and 8. All visualizations are obtained under IPC 10 using the CIFAR-10 and PACS datasets.
We present results based on the DC and DM baselines, and visualize the synthetic images and domain masks after the final condensation step.

Figure 5: Visualization of the final output and domain masks in CIFAR-10 under the 10 IPC setting. The shown images are condensed with DC+DAM.

Figure 6: Visualization of the final output and domain masks in PACS under the 10 IPC setting. The shown images are condensed with DC+DAM.

Figure 7: Visualization of the final output and domain masks in CIFAR-10 under the 10 IPC setting. The shown images are condensed with DM+DAM.

Figure 8: Visualization of the final output and domain masks in PACS under the 10 IPC setting. The shown images are condensed with DM+DAM.
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

Haomiao Qiu1,2, Miao Zhang1∗, Ziyue Qiao2, Weili Guan1, Min Zhang1, Liqiang Nie1
1Harbin Institute of Technology (Shenzhen)  2Great Bay University
24B951058@stu.hit.edu.cn, zhangmiao@hit.edu.cn, zyqiao@gbu.edu.cn, nieliqiang@gmail.com
∗Corresponding Author. Preprint.

Abstract

Continual Learning (CL) aims to enable models to continuously acquire new knowledge from a sequence of tasks while avoiding the forgetting of learned information. However, existing CL methods only rely on the parameters of the most recent task for inference, which makes them susceptible to catastrophic forgetting. Inspired by the recent success of model merging techniques, we propose Perturb-and-Merge (P&M), a novel continual learning framework that integrates model merging into the CL paradigm to mitigate forgetting. Specifically, after training on each task, P&M constructs a new model by forming a convex combination of the previous model and the newly trained task-specific model. Through theoretical analysis, we minimize the total loss increase across all tasks and derive an analytical solution for the optimal merging coefficient. To further improve the performance of the merged model, we observe that the degradation introduced during merging can be alleviated by a regularization term composed of the task vector and the Hessian matrix of the loss function. Interestingly, we show that this term can be efficiently approximated using second-order symmetric finite differences, and a stochastic perturbation strategy along the task vector direction is accordingly devised, which incurs no additional forward or backward passes while providing an effective approximation of the regularization term. Finally, we combine P&M with LoRA, a parameter-efficient fine-tuning method, to reduce memory overhead. Our proposed approach achieves state-of-the-art performance on several continual learning benchmark datasets.

1 Introduction

Continual Learning (CL) aims to enable models to continuously acquire new knowledge from a sequence of tasks while retaining previously learned information [1]. In many real-world applications, data arrives in a streaming fashion, and due to constraints such as privacy, storage, or computational resources, models are typically unable to retain or revisit earlier task data. This setting poses a fundamental challenge: how can a model continually adapt to new tasks while maintaining good performance on previous ones? Although considerable progress has been made through methods such as parameter isolation [2, 3, 4], regularization [5, 6, 7, 8], and experience replay [9, 10, 11, 12], catastrophic forgetting remains a core issue in CL.

In parallel, the rise of large-scale pretrained models has spurred interest in model merging as a simple yet effective strategy for post-training enhancement. Prior studies have shown that independently trained models, initialized from the same pretrained weights and optimized on different tasks, can be effectively merged, through parameter interpolation or more structured methods, into a single unified model that performs well across tasks [13, 14, 15, 16, 17, 18, 19, 20]. Model merging offers a practical mechanism for consolidating task-specific models without requiring shared training data. Although CL and model merging differ significantly in their procedures, they share a common goal: to learn a single model that performs well across multiple tasks.
The key distinction lies in the assumptions and timing of parameter updates: CL follows a sequential paradigm, where
only current-task data is accessible and previous-task parameters are reused for initialization, whereas model merging assumes that tasks are trained independently from the same pretrained model, without any data sharing. Both paradigms operate under task-isolated settings but differ in how and when task integration occurs. A major advantage of CL is that all tasks are trained along a shared optimization trajectory, which increases the likelihood that the resulting parameters reside near a joint optimum [21, 22]. In contrast, model merging offers a stable post-training integration mechanism, particularly attractive in scenarios where previous data cannot be revisited. Notably, most CL methods use the model parameters obtained after task $t$ to perform inference on all tasks $1$ to $t$, even though these parameters are mainly optimized for the current task [23, 21, 24]. Motivated by this observation, we propose a novel method, Perturb-and-Merge (P&M), which unifies the training dynamics of CL with the inference principle of model merging.

Infer after Merging. Specifically, for task $t$, we initialize training with the inference parameters $\hat{\theta}_{t-1}$ from previous tasks and, after training, obtain the task-specific optimum $\theta^*_t$. We then merge the two using the convex combination $\hat{\theta}_t = (1-\alpha)\hat{\theta}_{t-1} + \alpha\theta^*_t$. This process can be viewed as scaling the task vector $\Delta\theta^*_t = \theta^*_t - \hat{\theta}_{t-1}$ to reduce interference with previously learned knowledge. To determine the optimal merging coefficient $\alpha$, we analyze its influence on the total loss across all tasks and derive a closed-form solution that minimizes overall performance degradation. This solution depends on the Hessian matrix at the optimum of each task, which we approximate using the empirical Fisher information matrix.

Train with Perturbation. Furthermore, because model merging can introduce parameter conflicts, causing the merged model to underperform compared to task-specific models, we find that this degradation can be mitigated by introducing a regularization loss term composed of the task vector and the Hessian matrix during training. However, computing the Hessian matrix on a per-batch basis is computationally expensive. Interestingly, we show that this regularization term can be efficiently approximated using second-order symmetric finite differences. More importantly, injecting task-vector-aligned parameter perturbations during training provides a stochastic approximation of the regularizer, requiring no additional forward or backward passes. Our experiments demonstrate that such perturbations can effectively reduce parameter conflicts during model merging and enhance the performance of the merged model.

Finally, we combine P&M with LoRA (Low-Rank Adaptation) [25], a parameter-efficient fine-tuning strategy, to reduce memory overhead for model storage. Our overall approach achieves state-of-the-art performance on multiple CL benchmark datasets.

2 Related Work

2.1 Continual Learning

CL aims to enable models to learn new tasks sequentially without forgetting previously acquired knowledge. Regularization-based methods preserve important parameters by applying constraints to prevent forgetting [5, 6, 7, 8], while memory-based approaches use external buffers to store historical data for rehearsal or sampling [9, 10, 11, 12]. Architecture-based methods dynamically expand model capacity to accommodate new tasks [2, 3, 4]. Another direction focuses on constraining gradient directions to reduce task interference.
Examples include Orthogonal Weight Modulation (OWM) [26], Orthogonal Gradient Descent (OGD) [27], and Gradient Projection Memory (GPM) [28], which project gradients into task-specific subspaces to retain prior knowledge.
With the rise of large-scale pre-trained models [29, 30, 31], continual fine-tuning has become increasingly popular. However, full fine-tuning is computationally expensive, and more parameter-efficient tuning strategies, such as prompt-based learning [32, 33, 34], LoRA, and modular tuning methods [35], have been proposed to improve the scalability and practicality of CL. In this work, we integrate our method with LoRA to reduce both training and storage costs.

2.2 Model Merging

Model merging has gained traction in both multi-task and CL scenarios. Generally, it can be categorized into two directions: one line of work merges models fine-tuned on the same task to improve generalization [36, 37, 38, 39]; the other merges models trained on different tasks to form a single, unified model that can handle all constituent tasks. Model merging aims to integrate knowledge from different tasks without retraining from scratch [40, 41, 42]. Simple strategies use fixed merging weights based on the number of tasks [43], while more advanced methods such as IMM [44] and CoMA [22] empirically tune coefficients to improve performance. Some approaches [45, 39] go further by assigning parameter-wise merging coefficients to reflect the heterogeneous impact of different weights, but such methods often require extensive hyperparameter tuning. In contrast, our method provides a closed-form, theoretically optimal merging strategy without the need for manual tuning.

3 Method

3.1 Infer after Merging

In continual learning, there are $T$ tasks $\mathcal{T}_1,\dots,\mathcal{T}_T$, where each task includes data $\mathcal{D}_t=\{(x^t_i, y^t_i)\}_{i=1}^{n_t}$, with $x^t_i \in \mathbb{R}^d$ the input, $y^t_i \in \mathbb{R}$ the label, and $f_{\theta_0}(\cdot)$ a pretrained model with parameters $\theta_0$. When receiving the $t$-th task, the goal of CL is to achieve overall optimal performance across all $T$ tasks. Since we currently only have access to data from task $t$, we can only update $\theta_t$ by training on $\mathcal{D}_t$. Given an input $x$ with ground-truth label $y$ from $\mathcal{D}_t$ and a model prediction $\hat{p} = \mathrm{softmax}(f_\theta(x)) \in \mathbb{R}^C$, the cross-entropy loss is defined as

$$\mathcal{L}_{ce}(\theta) = -\sum_{c=1}^{C}\mathbb{I}(y=c)\log\hat{p}_c = -\log\hat{p}_y, \qquad (1)$$

where $\hat{p}_y$ is the predicted probability of the correct class $y$ and $\mathbb{I}(\cdot)$ is the indicator function. At this point, we have the model $\hat{\theta}_{t-1}$ used for inference on tasks $1$ to $t-1$, which we take as the starting point for training. After training with Eq. 1, we obtain the optimal parameters $\theta^*_t$ on the dataset $\mathcal{D}_t$:

$$\theta^*_t = \hat{\theta}_{t-1} + \Delta\theta^*_t = \hat{\theta}_{t-1} + \arg\min_{\Delta\theta_t} \mathcal{L}_t(\hat{\theta}_{t-1} + \Delta\theta_t). \qquad (2)$$

Traditional CL methods directly use $\theta^*_t$ as the inference parameters for the $t$-th task, i.e., $\hat{\theta}_t = \theta^*_t$. However, since $\theta^*_t$ is trained on $\mathcal{D}_t$, it may not fully guarantee the performance on $\mathcal{D}_1,\dots,\mathcal{D}_{t-1}$. A natural idea is to combine models trained on different tasks. Specifically, $\hat{\theta}_{t-1}$, which is used as the inference parameters for the previous tasks, performs well on tasks $1$ to $t-1$, while $\theta^*_t$, which is optimized for task $t$, serves as the inference parameters for task $t$. Therefore, we can merge the two models. Here we use a simple convex combination:

$$\hat{\theta}_t = (1-\alpha_t)\hat{\theta}_{t-1} + \alpha_t\theta^*_t, \qquad (3)$$

where $0 \le \alpha_t \le 1$ is the merging coefficient used to balance the weights of the two parameter sets, and $\hat{\theta}_t$ is used for inference on tasks $1$ to $t$.
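The "infer after merging" procedure can be summarized as a short loop. Below is a minimal PyTorch-style sketch under the simplifying assumptions that train_on_task optimizes the model to the task optimum (Eq. 2) and that the merging coefficients alpha_t are given (Section 3.2 derives them in closed form); all names are ours, not the authors'.

```python
import torch

@torch.no_grad()
def convex_merge(theta_hat, theta_star, alpha):
    """Eq. 3/4: theta_hat + alpha * (theta_star - theta_hat)."""
    return {k: v + alpha * (theta_star[k] - v) for k, v in theta_hat.items()}

def infer_after_merging(model, tasks, train_on_task, alphas):
    theta_hat = {k: v.clone() for k, v in model.state_dict().items()}
    for task, alpha in zip(tasks, alphas):
        model.load_state_dict(theta_hat)    # start from the inference params
        train_on_task(model, task)          # obtain theta*_t (Eq. 2)
        theta_hat = convex_merge(theta_hat, model.state_dict(), alpha)
    model.load_state_dict(theta_hat)        # final inference parameters
    return model
```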
Since $\theta^*_t$ is trained from $\hat{\theta}_{t-1}$, i.e., $\theta^*_t = \hat{\theta}_{t-1} + \Delta\theta^*_t$, Eq. 3 can be rewritten as:

$$\hat{\theta}_t = \hat{\theta}_{t-1} + \alpha_t \Delta\theta^*_t, \quad (4)$$

which means that taking the weighted average of $\hat{\theta}_{t-1}$ and $\theta^*_t$ is equivalent to scaling $\Delta\theta^*_t$. This scaling does not alter the parameters $\hat{\theta}_{t-1}$ learned for tasks $1$ to $t-1$; it only adjusts the task vector of task $t$ to reduce forgetting of old tasks. In our experiments (Sec. 4.3), we find that this scaling has only a slight impact on knowledge of the new task.

Table 1: Training and inference strategies.

| Method | Training for task $t$: $\theta^*_t = \arg\min_{\theta_t} \mathcal{L}_t(\theta_t)$ | Inference: $\hat{\theta}_t$ |
| --- | --- | --- |
| Continual Learning | $\theta^*_t = \theta_{t-1} + \Delta\theta^*_t$ | $\hat{\theta}_t = \theta^*_t$ |
| Model Merging | $\theta^*_t = \theta_0 + \Delta\theta^*_t$ | $\hat{\theta}_t = \sum_{i=1}^t \alpha_i \theta^*_i$ |
| Ours (P&M) | $\theta^*_t = \theta_{t-1} + \Delta\theta^*_t$ | $\hat{\theta}_t = \hat{\theta}_{t-1} + \alpha_t \Delta\theta^*_t$ |

Infer after Merging unifies the benefits of model merging and continual learning, as summarized in Tab. 1. In contrast to conventional CL, it mitigates interference with previously acquired knowledge while encouraging more compact task-specific optima through continued adaptation, ultimately promoting better generalization [21, 46]. Unlike momentum updates or exponential moving averages (EMA) [47, 48], which perform step-wise smoothing, our method merges the full task vector after completing each task. Instead of relying on historical gradients or parameter trajectories, it directly combines the current task's optimal solution with the previous model, better preserving task semantics and overall structure.

3.2 A Closed-Form Solution for the Optimal Merging Coefficient

Next, we aim to obtain an optimal $\alpha_t$. We are concerned with the performance degradation of the merged model $\hat{\theta}_t$ relative to each task-specific optimal model $\theta^*_i$, $i \le t$. For task $i$, we define the performance drop $\delta_i$ as:

$$\delta_i = \mathcal{L}_i(\hat{\theta}_t) - \mathcal{L}_i(\theta^*_i). \quad (5)$$

$\delta_i$ measures the increase in the loss of task $i$ when the merged model $\hat{\theta}_t$ is used instead of its optimal parameters $\theta^*_i$. Expanding it in a Taylor series gives

$$\delta_i = \nabla\mathcal{L}_i(\theta^*_i)^\top(\hat{\theta}_t - \theta^*_i) + \tfrac{1}{2}(\hat{\theta}_t - \theta^*_i)^\top H_i(\theta^*_i)(\hat{\theta}_t - \theta^*_i) + O(\|\hat{\theta}_t - \theta^*_i\|^3), \quad (6)$$

where $H_i(\theta^*_i)$ denotes the corresponding Hessian matrix. Since $\theta^*_i$ is the optimal parameter vector for task $i$, $\nabla\mathcal{L}_i(\theta^*_i)$ tends to $0$. Assuming $\|\hat{\theta}_t - \theta^*_i\|^3$ is negligible, we only need to consider the second-order term $\tfrac{1}{2}(\hat{\theta}_t - \theta^*_i)^\top H_i(\theta^*_i)(\hat{\theta}_t - \theta^*_i)$. Considering the change in loss over all tasks, we have

$$\alpha^*_t = \arg\min_{\alpha_t} \sum_{i=1}^t \delta_i \approx \arg\min_{\alpha_t} \sum_{i=1}^t \tfrac{1}{2}(\hat{\theta}_t - \theta^*_i)^\top H_i(\theta^*_i)(\hat{\theta}_t - \theta^*_i), \quad (7)$$

which has the closed-form solution:

$$\alpha^*_t = -\frac{\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)\,\Delta\theta^*_t}{\sum_{i=1}^t \Delta\theta^{*\top}_t H_i(\theta^*_i)\,\Delta\theta^*_t}. \quad (8)$$

Please refer to Appendix A.1 for the detailed derivation. To avoid the prohibitive cost of computing the full Hessian, we approximate it with the diagonal of the empirical Fisher information matrix [49, 50]. Given a model with parameters $\theta$, the empirical Fisher matrix is defined as:

$$F(\theta) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\nabla_\theta \log p_\theta(y|x)\,\nabla_\theta \log p_\theta(y|x)^\top\right]. \quad (9)$$

In practice, we approximate $F(\theta)$ by averaging over the whole training dataset:

$$\hat{F}(\theta) = \frac{1}{N}\sum_{i=1}^N \nabla_\theta \log p_\theta(y_i|x_i)\,\nabla_\theta \log p_\theta(y_i|x_i)^\top, \quad (10)$$

and use only the diagonal entries of $\hat{F}(\theta)$. After completing the training of task $t$, we first compute the diagonal empirical Fisher matrix $\hat{F}_t(\theta^*_t) \approx H_t(\theta^*_t)$, then obtain $\alpha^*_t$ from Eq. 8, and finally compute the merged model $\hat{\theta}_t = \hat{\theta}_{t-1} + \alpha^*_t \Delta\theta^*_t$ as the inference parameters for tasks $1$ to $t$.
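The sketch below (our own illustration, not the paper's implementation) estimates the diagonal Fisher of Eq. 10 and evaluates the closed-form coefficient of Eq. 8. For simplicity it flattens all parameters into one vector via `parameters_to_vector` and uses the squared batch gradient as a cheap stand-in for the per-sample average of Eq. 10; a faithful implementation would accumulate per-sample gradients and iterate only over the LoRA parameters.

```python
import torch
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector

def diag_empirical_fisher(model, loader):
    """Diagonal empirical Fisher (Eq. 10) as one flat vector.

    Uses the squared gradient of the batch loss -log p_theta(y|x) as an
    approximation of the per-sample average in Eq. 10."""
    fisher = torch.zeros_like(parameters_to_vector(model.parameters()))
    n_batches = 0
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        grads = parameters_to_vector([p.grad for p in model.parameters()])
        fisher += grads.detach() ** 2
        n_batches += 1
    return fisher / max(n_batches, 1)

def optimal_alpha(theta_prev, delta, thetas_star, fishers):
    """Closed-form alpha*_t of Eq. 8 with flattened vectors.

    theta_prev:  hat{theta}_{t-1}
    delta:       task vector Delta theta*_t of the current task
    thetas_star: stored per-task optima theta*_i, i = 1..t
    fishers:     stored diagonal Fishers F_i approximating diag(H_i)
    """
    num = sum(((theta_prev - th_i) * f_i * delta).sum()
              for th_i, f_i in zip(thetas_star, fishers))
    den = sum((delta * f_i * delta).sum() for f_i in fishers)
    return (-num / den).item()
```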
Note that for each previous task $i < t$, the corresponding Fisher matrix $\hat{F}_i(\theta^*_i)$ is computed and stored at the time of training task $i$.

3.3 Train with Perturbation

Furthermore, we can also reduce $\sum_{i=1}^t \delta_i(\alpha^*_t)$, making the merged model better on all $t$ tasks, by optimizing $\theta^*_t$ itself. Abbreviating $H_i(\theta^*_i)$ as $H_i$, we have

$$\sum_{i=1}^t \delta_i(\alpha^*_t) = \frac{1}{2}\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i (\hat{\theta}_{t-1} - \theta^*_i) - \frac{\left(\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i \Delta\theta^*_t\right)^2}{2\sum_{i=1}^t \Delta\theta^{*\top}_t H_i \Delta\theta^*_t} \quad (11)$$

$$\le \frac{1}{2}\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i (\hat{\theta}_{t-1} - \theta^*_i). \quad (12)$$

For details, please refer to Appendix A.2. To enhance the performance of the merged model, we aim to minimize the upper bound of $\sum_{i=1}^t \delta_i(\alpha^*_t)$. Among all the terms in this expression, only the last term $\delta_t(\alpha^*_t)$ involving $\theta^*_t$, specifically $(\hat{\theta}_{t-1} - \theta^*_t)^\top H_t (\hat{\theta}_{t-1} - \theta^*_t) = \Delta\theta^{*\top}_t H_t \Delta\theta^*_t$, depends on the current task $t$ and can be optimized during training.

During training, this term can be added as a regularizer to reduce $\delta_t(\alpha^*_t)$, but it requires real-time Hessian computation. While the Fisher information matrix is commonly used as a surrogate, it only approximates the Hessian near the optimum under certain conditions (e.g., as the number of data samples $N \to \infty$). Since the parameters are far from optimal during training and the Fisher can only be estimated from a single batch, it fails to capture the true curvature and is thus unreliable in this setting. Fortunately, we can approximate the quadratic form using symmetric finite differences:

$$\Delta\theta^\top_t H_t \Delta\theta_t \approx \frac{1}{\epsilon^2}\left(\mathcal{L}^{ce}_t(\theta_t + \epsilon\Delta\theta_t) + \mathcal{L}^{ce}_t(\theta_t - \epsilon\Delta\theta_t) - 2\mathcal{L}^{ce}_t(\theta_t)\right), \quad (13)$$

where $\epsilon \le 1$ is a small constant, $\Delta\theta_t$ denotes the task vector during training, and $\theta_t = \hat{\theta}_{t-1} + \Delta\theta_t$. This approximation follows from a second-order Taylor expansion of the loss around $\theta_t$. We incorporate the right-hand side of Eq. 13 as a regularization term during training and define the total training loss as:

$$\mathcal{L}_t(\theta_t) = \mathcal{L}^{ce}_t(\theta_t) + \lambda\mathcal{L}^{reg}_t(\theta_t) = \left(1 - \frac{2\lambda}{\epsilon^2}\right)\mathcal{L}^{ce}_t(\theta_t) + \frac{\lambda}{\epsilon^2}\left(\mathcal{L}^{ce}_t(\theta_t + \epsilon\Delta\theta_t) + \mathcal{L}^{ce}_t(\theta_t - \epsilon\Delta\theta_t)\right), \quad (14)$$

where $\lambda$ controls the strength of the regularization term. Eq. 14 requires two additional forward passes of the cross-entropy loss with parameters perturbed along the task-vector direction at each training step. While this avoids computing the full Hessian, it results in a threefold increase in memory and computation cost.

To improve efficiency, we propose a stochastic approximation. We observe that Eq. 14 is effectively equivalent to applying a random perturbation to the model parameters during training, where the perturbation direction aligns with the task vector of task $t$: the loss is evaluated under one of the three perturbations $\epsilon\Delta\theta_t$, $-\epsilon\Delta\theta_t$, or $0$. Specifically, at each training step, we sample one of three versions of the loss:

$$\tilde{\mathcal{L}}_t(\theta_t) = \begin{cases} \mathcal{L}^{ce}_t(\theta_t) & \text{with probability } p_0, \\ \mathcal{L}^{ce}_t(\theta_t + \epsilon\Delta\theta_t) & \text{with probability } p_+, \\ \mathcal{L}^{ce}_t(\theta_t - \epsilon\Delta\theta_t) & \text{with probability } p_-. \end{cases} \quad (15)$$

We define the sampling probabilities of the three perturbation terms as follows: the original point is sampled with probability $p_0 = 1 - \frac{2\lambda}{\epsilon^2}$, and the two perturbed points are each sampled with probability $p_+ = p_- = \frac{\lambda}{\epsilon^2}$. In the experiments, we treat $p_0$ as a hyperparameter and set $p_+ = p_- = \frac{1}{2}(1 - p_0)$. These probabilities ensure that the expected loss remains consistent with the original definition, yielding an unbiased estimate of the total loss:

$$\mathbb{E}[\tilde{\mathcal{L}}_t(\theta)] = p_0\mathcal{L}^{ce}_t(\theta) + p_+\mathcal{L}^{ce}_t(\theta + \epsilon\Delta\theta_t) + p_-\mathcal{L}^{ce}_t(\theta - \epsilon\Delta\theta_t) = \mathcal{L}_t(\theta). \quad (16)$$

This sampling strategy reduces the forward cost per batch from 3x to 1x without introducing bias into the gradient estimation. While the variance may increase slightly, this technique enables us to scale P&M to large models and datasets efficiently.
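The toy training loop below is our own minimal sketch of the stochastic regularizer of Eq. 15. It assumes the trainable parameters are exactly the task-vector part (as with the LoRA modules of Sec. 3.4), so perturbing by $\pm\epsilon\Delta\theta_t$ amounts to scaling the trainable update by $(1 \pm \epsilon)$; the linear head and data are placeholders.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
theta_prev = torch.randn(5, 3)                 # frozen hat{theta}_{t-1} (toy linear head)
delta = torch.zeros(5, 3, requires_grad=True)  # trainable task vector Delta theta_t
opt = torch.optim.AdamW([delta], lr=1e-2)

eps, p0 = 0.5, 1.0 / 3.0                       # defaults used in the experiments

def sample_scale():
    # Eq. 15: evaluate at Delta, (1+eps)Delta, or (1-eps)Delta
    # with probabilities p0, p+, p-.
    u = torch.rand(()).item()
    if u < p0:
        return 1.0
    return 1.0 + eps if u < p0 + (1 - p0) / 2 else 1.0 - eps

x = torch.randn(8, 3)                          # toy batch
y = torch.randint(0, 5, (8,))
for _ in range(100):
    opt.zero_grad()
    theta = theta_prev + sample_scale() * delta  # perturbed parameters theta_t + e~ Delta
    loss = F.cross_entropy(x @ theta.t(), y)
    loss.backward()
    opt.step()
```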
Algorithm 1 LoRA-P&M
1: Input: Datasets $\mathcal{D}_t = \{(x^t_i, y^t_i)\}_{i=1}^{n_t}$ for $T$ tasks $\mathcal{T}_1, \dots, \mathcal{T}_T$; a pre-trained model $f_\theta(\cdot)$ with parameters $\theta_0$, $\hat{\theta}_0 = \theta_0$; hyperparameters $\epsilon$, $p_0$, and $p_+ = p_- = \frac{1}{2}(1 - p_0)$.
2: Output: $\hat{\theta}_T$ for all tasks.
3: for $t = 1$ to $T$ do
4:   Fix $\hat{\theta}_{t-1}$ and initialize a new task-specific LoRA set for task $t$: LoRA$_t$
5:   repeat
6:     for each example $(x^t_i, y^t_i) \in \mathcal{D}_t$ do
7:       > Train with Perturbation:
8:       Sample perturbation $\tilde{\epsilon} \in \{-\epsilon, 0, +\epsilon\}$ according to probabilities $\{p_-, p_0, p_+\}$
9:       Compute loss $\mathcal{L}^{ce}_t(\hat{\theta}_{t-1} + (1 + \tilde{\epsilon})\,\mathrm{LoRA}_t;\, x^t_i, y^t_i)$
10:      Update LoRA$_t$ via AdamW
11:    end for
12:  until convergence
13:  > Infer after Merging:
14:  $\theta^*_t = \hat{\theta}_{t-1} + \mathrm{LoRA}_t$
15:  Estimate the diagonal empirical Fisher information matrix $\hat{F}_t(\theta^*_t)$ on task $\mathcal{T}_t$ using Eq. 10
16:  $\hat{\theta}_t = \hat{\theta}_{t-1} + \alpha^*_t\,\mathrm{LoRA}_t$, where $\alpha^*_t$ is given by Eq. 8
17: end for

3.4 LoRA-P&M

Note that computing Eq. 8 requires storing the optimal parameters $\theta^*_i$ of all previous tasks. As the number of tasks grows, this leads to increasing memory demands. To address this issue, we integrate LoRA [25], a low-rank parameter-efficient fine-tuning method, to reduce storage overhead. LoRA reduces the number of trainable parameters by decomposing the weight update into low-rank matrices, $\Delta W = AB$; for task $t$, a linear layer becomes $W_t = W_{t-1} + A_tB_t$, where only $A_t$ and $B_t$ are updated. In CL, a new LoRA module is added for each new task $t$, while the other modules are kept fixed and participate jointly in the forward pass [23, 51]. Let $P_{\mathrm{LoRA}} = \{(A^{(p)}, B^{(p)}) \mid p \in \mathcal{I}\}$ denote the set of LoRA modules inserted at various locations $p$ in the network, where $\mathcal{I}$ indexes the affected parameter subsets. The overall model parameters $\theta_t$ can then be written as:

$$\theta_t = \hat{\theta}_{t-1} + \sum_{p\in\mathcal{I}} A^{(p)}_t B^{(p)}_t. \quad (17)$$

In our experiments, we apply LoRA modules to the key and value projections, so only the Fisher information matrices of these parameters need to be stored. Experiments comparing the memory and time overhead of our method with LoRA are provided in Appendix B. The overall procedure is presented in Algorithm 1.
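A minimal, self-contained sketch of the LoRA parameterization of Eq. 17 for a single location $p$, with the `scale` argument implementing the $(1 + \tilde{\epsilon})$ task-vector perturbation of line 9 in Algorithm 1. This is our own illustration; class and argument names are hypothetical.

```python
import torch

class LoRALinear(torch.nn.Module):
    """Frozen base weight plus a trainable low-rank update (Eq. 17, one
    location p): W_t = W_{t-1} + A_t B_t, with only A_t and B_t trainable."""
    def __init__(self, base_weight: torch.Tensor, rank: int = 10):
        super().__init__()
        out_dim, in_dim = base_weight.shape
        self.register_buffer("W", base_weight)  # frozen W_{t-1}
        # A starts at zero so the low-rank update A @ B is zero at init.
        self.A = torch.nn.Parameter(torch.zeros(out_dim, rank))
        self.B = torch.nn.Parameter(torch.randn(rank, in_dim) * 0.01)

    def forward(self, x: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
        # scale = 1 + e~ perturbs the parameters along the task vector.
        return x @ (self.W + scale * (self.A @ self.B)).t()

layer = LoRALinear(torch.randn(8, 16), rank=4)
y = layer(torch.randn(2, 16), scale=1.5)  # forward at theta + eps*Delta, eps = 0.5
```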
4 Experiments

In this section, we first present the experimental setups and then compare P&M with state-of-the-art CL methods and model merging methods across multiple benchmarks.

4.1 Experimental Setups

Evaluation Benchmarks and Metrics. Following the evaluation protocols in [35, 23], we assess LoRA-P&M on five standard CL benchmarks: ImageNet-R [52], ImageNet-A [53], DomainNet [54], CIFAR100 [55], and CUB200 [56]. As in prior work [23, 21], we split ImageNet-R into 5, 10, and 20 tasks; ImageNet-A into 10 tasks; DomainNet into 5 tasks; and both CIFAR100 and CUB200 into 10 tasks. Specifically, DomainNet refers to the full version with all 345 classes, while DN* denotes a variant restricted to the 200 most populous classes, following [57, 54]. Following [21], we report two widely used CL metrics: average accuracy (Acc) and average anytime accuracy (AAA). Acc computes the mean accuracy over all $N$ tasks after all tasks are completed. AAA further captures learning dynamics by averaging the accuracy on all seen tasks after training on each new task.

Table 2: Performance comparison with CL methods on ImageNet-R across different task lengths.

| Method | 5 tasks Acc↑ | 5 tasks AAA↑ | 10 tasks Acc↑ | 10 tasks AAA↑ | 20 tasks Acc↑ | 20 tasks AAA↑ |
| --- | --- | --- | --- | --- | --- | --- |
| Full Fine-Tuning | 64.92±0.87 | 75.57±0.50 | 60.57±1.06 | 72.31±1.09 | 49.95±1.31 | 65.32±0.84 |
| L2P [33] | 73.04±0.71 | 76.94±0.41 | 71.26±0.44 | 76.13±0.46 | 68.97±0.51 | 74.16±0.32 |
| DualPrompt [58] | 69.99±0.57 | 72.24±0.41 | 68.22±0.20 | 73.81±0.39 | 65.23±0.45 | 71.30±0.16 |
| CODA-Prompt [32] | 76.63±0.27 | 80.30±0.28 | 74.05±0.41 | 78.14±0.39 | 69.38±0.33 | 73.95±0.63 |
| HiDe-Prompt [62] | 74.77±0.25 | 78.15±0.24 | 74.65±0.14 | 78.46±0.18 | 73.59±0.19 | 77.93±0.19 |
| InfLoRA [23] | 76.95±0.23 | 81.81±0.14 | 74.75±0.64 | 80.67±0.55 | 69.89±0.56 | 76.68±0.57 |
| SD-LoRA [21] | 79.15±0.20 | 83.01±0.42 | 77.34±0.35 | 82.04±0.24 | 75.26±0.37 | 80.22±0.72 |
| LoRA | 71.22±1.47 | 78.15±1.08 | 65.72±0.75 | 76.14±0.96 | 56.35±0.80 | 71.08±1.04 |
| LoRA-P&M | 81.47±0.56 | 85.96±0.52 | 79.95±0.18 | 85.29±0.93 | 76.37±0.09 | 82.77±0.71 |

Table 3: Performance comparison with model merging methods on 5 datasets.

| Method | INR-10 Acc↑ | INR-20 Acc↑ | INA-10 Acc↑ | DN*-5 Acc↑ | C100-10 Acc↑ | CUB-10 Acc↑ |
| --- | --- | --- | --- | --- | --- | --- |
| LoRA | 65.72 | 56.35 | 44.41 | 71.81 | 72.58 | 64.82 |
| w/ Model Averaging | 76.90 | 74.64 | 54.54 | 81.84 | 87.52 | 74.87 |
| w/ DARE [39] | 75.09 | 66.03 | 55.87 | 80.58 | 87.28 | 76.57 |
| w/ CoMA [63] | 79.34 | 75.60 | 53.24 | 83.98 | 86.95 | 74.65 |
| w/ CoFIMA [63] | 79.06 | 75.09 | 54.09 | 83.85 | 86.58 | 74.43 |
| w/ P&M | 79.95 | 76.37 | 56.57 | 84.71 | 88.45 | 78.29 |

Competing Methods and Implementation Details. We compare LoRA-P&M against state-of-the-art ViT-based CL methods, including L2P [33], DualPrompt [58], CODA-Prompt [32], HiDe-Prompt [59], InfLoRA [23], and SD-LoRA [21], with full fine-tuning serving as a performance lower bound. We also compare against model merging methods, including model averaging, DARE [39], CoMA [22], and CoFIMA [22]. Following prior work [35], we employ ViT-B/16 [30], pre-trained on ImageNet-21K and fine-tuned on ImageNet-1K, as the foundation model for classification. Following [23], we insert LoRA modules (rank 10) into the key and value projections of multi-head attention. We set $\epsilon = 0.5$ in Eq. 15, with uniform sampling $p_0 = p_+ = p_- = \frac{1}{3}$. Our method is optimized using AdamW [60] with an initial learning rate of 1e-3 for LoRA and 1e-2 for the classification head, following [61]. We use a batch size of 256 across all datasets. Each task is trained for 10 epochs, except for DomainNet, which is trained for 5 epochs. We report the mean and standard deviation over three runs to reflect the stability of the results. All results are obtained on a single NVIDIA L40S GPU.

4.2 Main Results

Comparison with State-of-the-Art CL Methods. As shown in Tab. 2, LoRA-P&M consistently outperforms both the original LoRA baseline and recent CL methods, including SD-LoRA [21], across all evaluated settings. On ImageNet-R with 5, 10, and 20 tasks, our method yields significant gains over LoRA (e.g., +14.23% in the 10-task setting) and surpasses SD-LoRA by up to +2.61%. For additional results on DomainNet, ImageNet-A, CIFAR100, and CUB200, please refer to Appendix B.

Comparison with Model Merging Methods. Tab. 3 compares LoRA-P&M with LoRA and several representative model merging methods across six benchmark settings.
P&M delivers substantial improvements over LoRA, up to +13.6% on INR-20 and +11.1% on C100-10, highlighting the benefits of post-training merging in preserving task knowledge. While methods such as DARE and CoFIMA offer moderate improvements over LoRA, they consistently underperform P&M. Notably, P&M exceeds CoFIMA by +3.86% on CUB-10 and +2.48% on INA-10.

Figure 1: P&M reduces forgetting with minimal impact on plasticity. Comparison of LoRA and LoRA-P&M on four benchmarks (ImageNet-R 20 tasks, CIFAR100 10 tasks, DN* 10 tasks, CUB200 10 tasks; bars show plasticity, forgetting, and Acc). P&M achieves similar plasticity (current-task performance) while significantly mitigating forgetting (average performance drop on previous tasks), resulting in higher overall Acc.

4.3 Analysis

This subsection investigates how P&M enhances CL performance.

Observation 1: Scaling task vectors reduces forgetting with minimal impact on plasticity. We define plasticity as the average accuracy on the new task and forgetting as the average accuracy drop on previously learned tasks. As shown in Fig. 1, scaling the task vector during inference (P&M) maintains plasticity comparable to that of the standard task vector, as also observed in prior work [64], while significantly reducing forgetting and thus improving overall performance. We then analyze its behavior through loss landscape visualizations and directional perturbation experiments. Specifically, the merged model is computed as $\hat{\theta}_t = \beta\hat{\theta}_{t-1} + \alpha\theta^*_t$, where $\alpha$ is derived from Eq. 8 and $\beta = 1 - \alpha$. Figure 2 visualizes the average loss landscape on ImageNet-R (left) and CUB (right); we visualize merging after Tasks 4 and 7. Each subplot spans the 2D convex space formed by $\hat{\theta}_{t-1}$ and $\theta^*_t$, with one axis indicating the weight of the task-specific model and the other the weight of the previous model. Each point corresponds to a merged model, colored by its average loss across all learned tasks. M refers to using only Infer after Merging, without applying Train with Perturbation.

Observation 2: Convex combinations lie in low-loss regions. Our method determines a coefficient $\alpha$ and computes the merged parameters as a convex combination. This combination is a direct interpolation between the starting and ending points of a task's training. As the loss contours in all eight plots reveal, the convex paths between $\hat{\theta}_3$ and $\theta^*_4$, and between $\hat{\theta}_6$ and $\theta^*_7$, consistently lie in a low-loss region. This supports our choice of convex combination over other merging methods, such as task arithmetic [65], which may move the merged model into unstable regions.

Observation 3: Optimal coefficients locate low-loss interpolation points. We next evaluate the effectiveness of the coefficients computed by our method. In each left-hand plot (without parameter perturbation), the merged point obtained using our optimal $\alpha$ consistently lies in a lower-loss region than either endpoint $\hat{\theta}_{t-1}$ or $\theta^*_t$. This indicates that our closed-form solution reliably identifies favorable interpolation points on the loss surface, providing empirical support for the theoretical foundation of our merging formulation.

Observation 4: Task-vector perturbation encourages flat minima and better generalization.
In each dataset’s visualization, the left plot corresponds to training with standard cross-entropy (CE) loss, while the right plot depicts training with CE loss combined with parameter perturbation. The loss contours indicate that the additional
perturbation enlarges the flatness and width of the low-loss basin around the merged model, making it more likely for the model to fall near an optimal region. This suggests that parameter perturbation helps avoid sharp minima and reduces parameter interference during model merging, thereby enhancing the generalization ability of the merged model. To further evaluate the importance of the perturbation direction, we compare task-vector-based perturbation with random Gaussian noise of the same Frobenius norm. As shown in Tab. 4, models perturbed along the task vector consistently outperform those using random noise. This demonstrates that perturbing in the direction of the task vector provides a reliable approximation to Eq. 11, thereby improving the performance of model merging.

Figure 2: Loss landscape visualization on ImageNet-R (10 tasks) and CUB (10 tasks). M denotes using only Infer after Merging (no perturbation during training). Each subplot shows the average loss surface after merging at Task 4 and Task 7, with axes representing the weights $\alpha$ and $\beta$ in the convex combination $\hat{\theta}_t = \beta\hat{\theta}_{t-1} + \alpha\theta^*_t$. The convex path lies in a low-loss region (Obs. 2), our optimal $\alpha$ consistently locates near the minimum (Obs. 3), and task-vector perturbation further enlarges the flat region (Obs. 4).

Table 4: Ablation study of the proposed P&M.

| Method | INR-10 Acc↑ | INR-20 Acc↑ | INA-10 Acc↑ | DN*-5 Acc↑ | C100-10 Acc↑ | CUB-10 Acc↑ |
| --- | --- | --- | --- | --- | --- | --- |
| LoRA | 65.72 | 56.35 | 44.41 | 71.81 | 72.58 | 64.82 |
| LoRA-M | 78.35 | 74.26 | 56.16 | 81.28 | 86.57 | 74.98 |
| LoRA-M w/ Gaussian noise | 78.48 | 74.13 | 49.51 | 83.00 | 85.83 | 74.09 |
| LoRA-P&M | 79.95 | 76.37 | 56.57 | 84.71 | 88.45 | 78.29 |

In summary, P&M improves CL in three key respects: (1) post-training scaling mitigates forgetting; (2) the convex combination of models, together with a theoretically grounded optimal coefficient, ensures that the merged model lies near an optimal region; and (3) task-vector perturbations enhance generalization and reduce parameter interference.

5 Conclusion

We propose P&M, a novel CL framework that incorporates post-training model merging into the learning paradigm. By combining a theoretically grounded merging strategy with task-vector-aligned perturbations, P&M effectively mitigates catastrophic forgetting. Our approach merges models via a convex combination of the current task optimum and the previous model, with the optimal coefficient derived from a closed-form, loss-based objective. To further enhance robustness, we introduce a lightweight regularization mechanism during training that applies stochastic perturbations along the task-vector direction to improve the performance of the merged model. Integrated with LoRA, P&M offers a memory-efficient solution and achieves strong performance across various CL benchmarks. Experimental results show that unifying training-time dynamics with post-training merging provides a simple yet effective strategy for building continual learners with strong generalization and stability.
Limitation. Our method estimates the optimal merging coefficient via an analytical form based on the diagonal empirical Fisher information matrix. However, this diagonal approximation may not fully capture the true curvature of the loss landscape and thus does not always guarantee optimality. Exploring more accurate yet efficient curvature approximations is a direction for future work.

References

[1] German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 113:54-71, 2019.
[2] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
[3] Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen. Compacting, picking and growing for unforgetting continual learning. In Advances in Neural Information Processing Systems, pages 13647-13657, 2019.
[4] Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In Proceedings of the International Conference on Machine Learning, pages 3925-3934, 2019.
[5] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In ICML, pages 3987-3995, 2017.
[6] Sangwon Jung, Hongjoon Ahn, Sungmin Cha, and Taesup Moon. Continual learning with node-importance based adaptive group sparse regularization. Advances in Neural Information Processing Systems, pages 3647-3658, 2020.
[7] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In ECCV, pages 139-154, 2018.
[8] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. PNAS, 114(13):3521-3526, 2017.
[9] Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, and Lucas Page-Caccia. Online continual learning with maximal interfered retrieval. In Advances in Neural Information Processing Systems, pages 11849-11860, 2019.
[10] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, pages 11816-11825, 2019.
[11] Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, and Liang Wan. Exploring example influence in continual learning. Advances in Neural Information Processing Systems, pages 27075-27086, 2022.
[12] Yan-Shuo Liang and Wu-Jun Li. Loss decoupling for task-agnostic continual learning. In Advances in Neural Information Processing Systems, 2023.
[13] Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. In ICLR, 2023.
[14] Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. Ties-merging: Resolving interference when merging models. In NeurIPS, 2023.
[15] Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, and Dacheng Tao. Adamerging: Adaptive model merging for multi-task learning. In ICLR, 2024.
[16] Michael Matena and Colin Raffel. Merging models with fisher-weighted averaging. In NeurIPS, 2022.
[17] Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, and Pengxiang Cheng. Dataless knowledge fusion by merging weights of language models. In ICLR, 2023.
[18] Nico Daheim, Thomas Möllenhoff, Edoardo Maria Ponti, Iryna Gurevych, and Mohammad Emtiyaz Khan. Model merging by uncertainty-based gradient matching. In ICLR, 2024.
[19] Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, and Li Shen. Deep model fusion: A survey. arXiv preprint arXiv:2309.15698, 2023.
[20] Enneng Yang, Li Shen, Guibing Guo, Xingwei Wang, Xiaochun Cao, Jie Zhang, and Dacheng Tao. Model merging in llms, mllms, and beyond: Methods, theories, applications and opportunities. arXiv preprint arXiv:2408.07666, 2024.
[21] Yichen Wu, Hongming Piao, Long-Kai Huang, Renzhen Wang, Wanhua Li, Hanspeter Pfister, Deyu Meng, Kede Ma, and Ying Wei. Sd-lora: Scalable decoupled low-rank adaptation for class incremental learning.
[22] Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, and Stéphane Lathuilière. Weighted ensemble models are strong continual learners. In ECCV, 2024.
[23] Yan-Shuo Liang and Wu-Jun Li. Inflora: Interference-free low-rank adaptation for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23638-23647, 2024.
[24] Hao Yu, Xin Yang, Xin Gao, Yan Kang, Hao Wang, Junbo Zhang, and Tianrui Li. Personalized federated continual learning via multi-granularity prompt. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4023-4034, 2024.
[25] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
[26] Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. Continual learning of context-dependent processing in neural networks. Nature Machine Intelligence, 1(8):364-372, 2019.
[27] Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. Orthogonal gradient descent for continual learning. In AISTATS, pages 3762-3773, 2020.
[28] Gobinda Saha, Isha Garg, and Kaushik Roy. Gradient projection memory for continual learning. In ICLR, 2021.
[29] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022.
[30] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
[31] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171-4186, 2019.
[32] James Seale Smith, Leonid Karlinsky, Vyshnavi Gutta, Paola Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Rameswar Panda, Rogerio Feris, and Zsolt Kira. Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11909-11919, 2023.
[33] Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139-149, 2022.
[34] Muhammad Gul Zain Ali Khan, Muhammad Ferjad Naeem, Luc Van Gool, Didier Stricker, Federico Tombari, and Muhammad Zeshan Afzal. Introducing language guidance in prompt-based continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11463-11473, 2023.
[35] Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, and Jian Zhang. A unified continual learning framework with general parameter-efficient tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11449-11459, 2023.
[36] Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. Swad: Domain generalization by seeking flat minima. In NeurIPS, volume 34, pages 22405-22418, 2021.
[37] Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In ICML, pages 23965-23998. PMLR, 2022.
[38] Keller Jordan, Hanie Sedghi, Olga Saukh, Rahim Entezari, and Behnam Neyshabur. Repair: Renormalizing permuted activations for interpolation repair. In ICLR, 2023.
[39] Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In Forty-first International Conference on Machine Learning, 2024.
[40] Zafir Stojanovski, Karsten Roth, and Zeynep Akata. Momentum-based weight interpolation of strong zero-shot models for continual learning. In NeurIPS Workshop, 2022.
[41] Christian Simon, Masoud Faraki, Yi-Hsuan Tsai, Xiang Yu, Samuel Schulter, Yumin Suh, Mehrtash Harandi, and Manmohan Chandraker. On generalizing beyond domains in cross-domain continual learning. In CVPR, pages 9265-9274, 2022.
[42] Janghyeon Lee, Donggyu Joo, Hyeong Gwon Hong, and Junmo Kim. Residual continual learning. In AAAI, volume 34, pages 4553-4560, 2020.
[43] Guoliang Lin, Hanlu Chu, and Hanjiang Lai. Towards better plasticity-stability trade-off in incremental learning: A simple linear connector. In CVPR, pages 89-98, 2022.
[44] Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. In NeurIPS, volume 30, 2017.
[45] Daniel Marczak, Bartłomiej Twardowski, Tomasz Trzciński, and Sebastian Cygert. Magmax: Leveraging model merging for seamless continual learning. In ECCV, 2024.
[46] Yeoreum Lee, Jinwook Jung, and Sungyong Baik. Mitigating parameter interference in model merging via sharpness-aware fine-tuning. arXiv preprint arXiv:2504.14662, 2025.
[47] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017.
[48] Qin Wang, Olga Fink, Luc Van Gool, and Dengxin Dai. Continual test-time domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7201-7211, 2022.
[49] James C Spall. Monte carlo computation of the fisher information matrix in nonstandard settings. Volume 14, pages 889-909. Taylor & Francis, 2005.
[50] James C Spall. Improved methods for monte carlo estimation of the fisher information matrix. In 2008 American Control Conference, pages 2395-2400. IEEE, 2008.
[51] Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023.
[52] Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Concetto Spampinato, and Simone Calderara. Transfer without forgetting. In Proceedings of the European Conference on Computer Vision, pages 692-709, 2022.
[53] Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Christina Kwon, and Jacob Steinhardt. Natural adversarial examples. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262-15271, 2021.
[54] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1406-1415, 2019.
[55] A Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
[56] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
[57] Yue Lu, Shizhou Zhang, De Cheng, Yinghui Xing, Nannan Wang, Peng Wang, and Yanning Zhang. Visual prompt tuning in null space for continual learning. arXiv preprint arXiv:2406.05658, 2024.
[58] Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, et al. Dualprompt: Complementary prompting for rehearsal-free continual learning. In ECCV, pages 631-648, 2022.
[59] Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, and Jun Zhu. Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality. In Advances in Neural Information Processing Systems, pages 69054-69076, 2024.
[60] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2018.
[61] Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, and Yunchao Wei. Slca: Slow learner with classifier alignment for continual learning on a pre-trained model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19148-19158, 2023.
[62] Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, and Jun Zhu. Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality. arXiv preprint arXiv:2310.07234, 2023.
[63] Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, and Stéphane Lathuilière. Weighted ensemble models are strong continual learners. In European Conference on Computer Vision, pages 306-324. Springer, 2024.
[64] Ke Wang, Nikolaos Dimitriadis, Alessandro Favero, Guillermo Ortiz-Jimenez, Francois Fleuret, and Pascal Frossard. Lines: Post-training layer scaling prevents forgetting and enhances model merging. 2024.
[65] Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. In ICLR, 2023.
[66] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.

A Proof and Theoretical Details

A.1 Proof of Eq. 8
To determine the optimal $\alpha^*_t$, we consider the following optimization problem:

$$\alpha^*_t = \arg\min_{\alpha_t} \sum_{i=1}^t \delta_i \approx \arg\min_{\alpha_t} \sum_{i=1}^t \frac{1}{2}(\hat{\theta}_t - \theta^*_i)^\top H_i(\theta^*_i)(\hat{\theta}_t - \theta^*_i), \quad (18)$$

where $\hat{\theta}_{t-1}$ denotes the inference-time model parameters after learning tasks $1$ through $t-1$, and $\theta^*_i$ represents the optimal parameters for task $i$.
The matrix $H_i(\theta^*_i)$ denotes the Hessian of the loss function of task $i$ evaluated at its optimum $\theta^*_i$. Substituting $\hat{\theta}_t = \hat{\theta}_{t-1} + \alpha_t\Delta\theta^*_t$ into the objective, we obtain:

$$\sum_{i=1}^t \frac{1}{2}(\hat{\theta}_{t-1} + \alpha_t\Delta\theta^*_t - \theta^*_i)^\top H_i(\theta^*_i)(\hat{\theta}_{t-1} + \alpha_t\Delta\theta^*_t - \theta^*_i). \quad (19)$$

Expanding the quadratic form inside the summation yields:

$$\sum_{i=1}^t \frac{1}{2}\Big[(\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)(\hat{\theta}_{t-1} - \theta^*_i) + 2\alpha_t(\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)\Delta\theta^*_t + \alpha_t^2(\Delta\theta^*_t)^\top H_i(\theta^*_i)\Delta\theta^*_t\Big]. \quad (20)$$

Define the objective function $J(\alpha_t)$ as:

$$J(\alpha_t) = \sum_{i=1}^t \frac{1}{2}\Big[(\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)(\hat{\theta}_{t-1} - \theta^*_i) + 2\alpha_t(\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)\Delta\theta^*_t + \alpha_t^2(\Delta\theta^*_t)^\top H_i(\theta^*_i)\Delta\theta^*_t\Big]. \quad (21)$$

Taking the derivative of $J(\alpha_t)$ with respect to $\alpha_t$ gives:

$$\frac{dJ}{d\alpha_t} = \sum_{i=1}^t \Big[(\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)\Delta\theta^*_t + \alpha_t(\Delta\theta^*_t)^\top H_i(\theta^*_i)\Delta\theta^*_t\Big]. \quad (22)$$

Setting $\frac{dJ}{d\alpha_t} = 0$ leads to the first-order optimality condition:

$$\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)\Delta\theta^*_t + \alpha_t\sum_{i=1}^t (\Delta\theta^*_t)^\top H_i(\theta^*_i)\Delta\theta^*_t = 0. \quad (23)$$

Solving for $\alpha_t$, we obtain the optimum:

$$\alpha^*_t = -\frac{\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\theta^*_i)\Delta\theta^*_t}{\sum_{i=1}^t (\Delta\theta^*_t)^\top H_i(\theta^*_i)\Delta\theta^*_t}. \quad (24)$$

A.2 Details of Eq. 11

Further, we want to reduce $\sum_{i=1}^t \delta_i$ to make the merged model better. It is evident that $\sum_{i=1}^t \delta_i \ge \sum_{i=1}^t \delta_i(\alpha^*_t)$ holds. We also have

$$\sum_{i=1}^t \delta_i(\alpha^*_t) = \frac{1}{2}\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\hat{\theta}_{t-1} - \theta^*_i) - \frac{\big(\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i\Delta\theta^*_t\big)^2}{2\sum_{i=1}^t \Delta\theta^{*\top}_t H_i\Delta\theta^*_t} \quad (25)$$

$$\le \frac{1}{2}\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\hat{\theta}_{t-1} - \theta^*_i). \quad (26)$$

The inequality holds because the subtracted term is non-negative. We next explain why it is reasonable to ignore the second term and focus only on the first. The first term on the right-hand side dominates: it is typically larger in magnitude and easier to compute, while the second term involves nested quadratic forms and is costly to evaluate. We therefore retain only the leading term as an approximation and obtain the upper bound:

$$\sum_{i=1}^t \delta_i(\alpha^*_t) \le \frac{1}{2}\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\hat{\theta}_{t-1} - \theta^*_i). \quad (27)$$

We next show that the first term is indeed the dominant one. For the decomposition

$$\sum_{i=1}^t \delta_i(\alpha^*_t) = \frac{1}{2}\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i(\hat{\theta}_{t-1} - \theta^*_i) - \frac{\big(\sum_{i=1}^t (\hat{\theta}_{t-1} - \theta^*_i)^\top H_i\Delta\theta^*_t\big)^2}{2\sum_{i=1}^t \Delta\theta^{*\top}_t H_i\Delta\theta^*_t},$$

we aim to show that the second term is always less than or equal to the first. Writing $a_i = \hat{\theta}_{t-1} - \theta^*_i$ and $v = \Delta\theta^*_t$, this reads

$$\frac{\big(\sum_{i=1}^t a_i^\top H_i v\big)^2}{\sum_{i=1}^t v^\top H_i v} \le \sum_{i=1}^t a_i^\top H_i a_i. \quad (28)$$

Proof. Let $A = \sum_{i=1}^t a_i^\top H_i a_i$ and $B = \sum_{i=1}^t v^\top H_i v$, and define

$$C = \sum_{i=1}^t a_i^\top H_i v = \langle a, v\rangle_H, \quad (29)$$

where $\langle\cdot,\cdot\rangle_H$ denotes a generalized inner product defined via the weighted Hessians. Then, by the Cauchy-Schwarz inequality in inner product spaces,

$$C^2 = \Big(\sum_{i=1}^t a_i^\top H_i v\Big)^2 \le \Big(\sum_{i=1}^t a_i^\top H_i a_i\Big)\cdot\Big(\sum_{i=1}^t v^\top H_i v\Big) = A\cdot B. \quad (30)$$

Therefore,

$$\frac{C^2}{B} \le A \;\Rightarrow\; \text{second term} \le \text{first term}. \quad (31)$$

This proves that, in the decomposition where the objective is the first term minus the second, the second term is always at most the first; the first term dominates. We therefore focus on optimizing the first term through regularization, which is likely to reduce the overall objective as well.
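As a quick numerical sanity check (our own toy verification, not part of the paper), one can confirm that the closed-form coefficient of Eq. 24 minimizes the quadratic objective and that the bound of Eq. 28 holds for positive-semidefinite Hessians:

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 6, 3
H = [np.diag(rng.uniform(0.1, 2.0, d)) for _ in range(t)]  # toy PSD "Hessians"
a = [rng.normal(size=d) for _ in range(t)]                 # a_i = theta_hat - theta*_i
v = rng.normal(size=d)                                     # Delta theta*_t

num = sum(ai @ Hi @ v for ai, Hi in zip(a, H))
den = sum(v @ Hi @ v for Hi in H)
alpha_star = -num / den                                    # Eq. 24

def J(al):
    # Quadratic objective of Eq. 19 with a_i + alpha * v.
    return sum(0.5 * (ai + al * v) @ Hi @ (ai + al * v)
               for ai, Hi in zip(a, H))

assert all(J(alpha_star) <= J(alpha_star + s) for s in (-0.1, 0.1))

# Cauchy-Schwarz bound of Eq. 28: C^2 / B <= A.
A = sum(ai @ Hi @ ai for ai, Hi in zip(a, H))
assert num ** 2 / den <= A + 1e-12
```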
B More Experimental Results and Details

Details of Datasets. ImageNet-R contains 200 ImageNet [66] classes rendered in various artistic styles. ImageNet-A comprises 200 classes featuring natural adversarial examples that are typically misclassified by standard ImageNet-trained models. DomainNet covers 345 object categories across six distinct visual domains. CIFAR100 is a classic image classification dataset with 60,000 images equally distributed across 100 classes. CUB200 is a fine-grained bird classification dataset containing 11,788 images over 200 categories.

More results on DomainNet, ImageNet-A, CIFAR100, and CUB200. As shown in Tabs. 5 and 6, across all four benchmarks LoRA-P&M consistently outperforms both standard LoRA and the recent SD-LoRA. On challenging datasets such as ImageNet-A and DomainNet, it achieves the highest accuracy and AAA, surpassing SD-LoRA by up to +0.61% and +6.38%, respectively. Notably, on CIFAR100 and CUB200, LoRA-P&M matches or exceeds SD-LoRA's accuracy while maintaining lower variance. These results highlight the robustness and generalization advantage of post-training task-vector merging over both naive LoRA and structured dynamic variants like SD-LoRA.

Table 5: Performance comparison on ImageNet-A and DomainNet.

| Method | ImageNet-A 10 tasks Acc↑ | AAA↑ | DomainNet 5 tasks Acc↑ | AAA↑ |
| --- | --- | --- | --- | --- |
| Full Fine-Tuning | 16.31±7.89 | 30.04±13.18 | 51.46±0.47 | 67.08±1.13 |
| L2P [33] | 42.94±1.27 | 51.40±1.95 | 70.26±0.25 | 75.83±0.98 |
| DualPrompt [58] | 45.49±0.96 | 54.68±1.24 | 68.26±0.90 | 73.84±0.45 |
| CODA-Prompt [32] | 45.36±0.78 | 57.03±0.94 | 70.58±0.53 | 76.68±0.44 |
| HiDe-Prompt [59] | 42.70±0.60 | 56.32±0.40 | 72.20±0.08 | 77.01±0.04 |
| InfLoRA [23] | 49.20±1.12 | 60.92±0.61 | 71.59±0.23 | 78.29±0.50 |
| SD-LoRA [21] | 55.96±0.73 | 64.95±1.63 | 72.82±0.37 | 78.89±0.50 |
| SD-LoRA-RR [21] | 55.59±1.08 | 64.59±1.91 | 72.58±0.40 | 78.79±0.78 |
| SD-LoRA-KD [21] | 54.24±1.12 | 63.89±0.58 | 72.15±0.50 | 78.44±0.66 |
| LoRA | 44.41±0.57 | 56.76±3.54 | 64.82±2.30 | 75.28±1.56 |
| LoRA-P&M | 56.57±0.78 | 65.35±1.81 | 76.91±0.53 | 85.27±0.07 |

Table 6: Performance comparison on CIFAR100 and CUB200.

| Method | CIFAR100 10 tasks Acc↑ | AAA↑ | CUB200 10 tasks Acc↑ | AAA↑ |
| --- | --- | --- | --- | --- |
| Full Fine-Tuning | 69.49±0.50 | 80.35±0.87 | 51.43±1.41 | 69.74±0.93 |
| L2P [33] | 83.18±1.20 | 87.69±1.05 | 65.18±2.49 | 76.12±1.27 |
| DualPrompt [58] | 81.48±0.86 | 86.41±0.66 | 68.00±1.06 | 79.40±0.88 |
| CODA-Prompt [32] | 86.31±0.12 | 90.67±0.22 | 71.92±0.33 | 78.76±0.65 |
| InfLoRA [23] | 86.75±0.35 | 91.72±0.15 | 70.82±0.23 | 81.39±0.14 |
| SD-LoRA [21] | 88.01±0.31 | 92.54±0.18 | 77.48±0.20 | 85.59±0.44 |
| LoRA | 72.58±1.57 | 80.64±2.31 | 64.82±2.30 | 75.28±1.56 |
| LoRA-P&M | 88.45±0.35 | 92.89±1.13 | 78.29±0.50 | 83.39±0.61 |

Hyperparameter ablation. We conduct ablation studies on the hyperparameters $p_0$ (probability of zero perturbation) and $\epsilon$ (perturbation magnitude) on the ImageNet-A dataset. As shown in Figure 3, setting $p_0 = 0.33$ and $\epsilon = 0.5$ yields consistently good, though not necessarily optimal, performance. To reduce tuning overhead and maintain consistency, we use $p_0 = 0.33$ and $\epsilon = 0.5$ as default values for all experiments. Interestingly, when $p_0 = 0.33$, $p_0 = p_+ = p_-$.

Memory and Training Time Analysis. We compare the memory footprint and training time of LoRA and our proposed LoRA-P&M on ImageNet-R and CIFAR100. As shown in Tab. 7, LoRA-P&M introduces only a modest increase in memory usage, approximately 1.5 GB over standard LoRA, due to computing and storing the diagonal empirical Fisher information matrix, while the cost introduced by stochastic perturbation is negligible. Similarly, the training time increases by 0.2 to 0.4 hours per dataset. Despite this overhead, LoRA-P&M achieves substantial accuracy gains: +14.23% on ImageNet-R and +15.87% on CIFAR100. These results demonstrate that our method offers a favorable trade-off between performance and resource efficiency.

Figure 3: Hyperparameter ablation on ImageNet-A.
Left: performance under varying $p_0$ values (probability of zero perturbation, with $\epsilon = 0.5$). Right: performance under different $\epsilon$ values (magnitude of perturbation, with $p_0 = 0.33$).

Table 7: Comparison of memory usage, training time, and accuracy between LoRA and LoRA-P&M.
Physics-Informed Distillation of Diffusion Models for PDE-Constrained Generation

Yi Zhang, Institute of Data Science, The University of Hong Kong, yizhang101@connect.hku.hk
Difan Zou, Institute of Data Science & School of Computing and Data Science, The University of Hong Kong, dzou@cs.hku.hk

Abstract

Modeling physical systems in a generative manner offers several advantages, including the ability to handle partial observations, generate diverse solutions, and address both forward and inverse problems. Recently, diffusion models have gained increasing attention in the modeling of physical systems, particularly those governed by partial differential equations (PDEs). However, diffusion models only access noisy data $x_t$ at intermediate steps, making it infeasible to directly enforce constraints on the clean sample $x_0$ at each noise level. As a workaround, constraints are typically applied to the expectation of clean samples $\mathbb{E}[x_0|x_t]$, which is estimated using the learned score network. However, imposing PDE constraints on the expectation does not strictly represent the constraint on the true clean data; this discrepancy is known as Jensen's Gap. The gap creates a trade-off: enforcing PDE constraints may come at the cost of reduced accuracy in generative modeling. To address this, we propose a simple yet effective post-hoc distillation approach, in which PDE constraints are not injected directly into the diffusion process but are instead enforced during a post-hoc distillation stage. We term our method Physics-Informed Distillation of Diffusion Models (PIDDM). This distillation not only facilitates single-step generation with improved PDE satisfaction, but also supports forward and inverse problem solving and reconstruction from random partial observations. Extensive experiments across various PDE benchmarks demonstrate that PIDDM significantly improves PDE satisfaction over several recent and competitive baselines, such as PIDM [3], DiffusionPDE [23], and ECI-sampling [8], with less computation overhead. Our approach can shed light on more efficient and effective strategies for incorporating physical constraints into diffusion models.

1 Introduction

Solving partial differential equations (PDEs) underpins innumerable applications in physics, biology, and engineering, spanning fluid flow [11], heat transfer [25], elasticity [59], electromagnetism [26], and chemical diffusion [10]. Classical discretisation schemes such as finite-difference [55] and finite-element methods [32] provide reliable solutions, but their computational cost grows sharply with mesh resolution, dimensionality, and parameter sweeps, limiting their practicality for large-scale or real-time simulations [24]. This bottleneck has fuelled a surge of learning-based solvers that approximate or accelerate PDE solutions, from early physics-informed neural networks (PINNs) [49] to modern operator-learning frameworks such as DeepONet [42], Fourier Neural Operators [34], and Physics-Informed Neural Operators [35], offering faster inference, uncertainty quantification, and seamless integration into inverse or data-driven tasks.

Among these learning-based solvers, diffusion models [21, 58] have emerged as a promising framework for generative learning of physical systems.
Figure 1: Illustration of physics-constrained diffusion generation and our proposed framework. (a) Existing methods [23, 8, 3, 27] impose PDE losses or guidance on the posterior mean $\mathbb{E}[x_0|x_t]$ in diffusion training and sampling, introducing Jensen's Gap. (b) We propose to train and sample the diffusion model with vanilla methods to generate
a noise-image paired dataset for distillation. (c) Our proposed framework distills the teacher diffusion model and directly enforces physical constraints on the final generated sample $x_0$, avoiding Jensen's Gap.

In the context of PDEs, a diffusion model can learn the joint distribution over solution and coefficient fields $x_0 = (u, a)$ from observed data, where $a$ denotes input parameters satisfying boundary condition $\mathcal{B}$ (e.g., material properties or initial conditions) and $u$ denotes the corresponding solution satisfying the PDE system $\mathcal{F}$. Once trained, the model can generate new samples $(u, a)$ from this learned distribution, enabling applications such as forward simulation (sample $u$ from $a$), inverse recovery (sample $a$ from $u$), or conditional reconstruction (recover missing parts of $u$ or $a$). While diffusion models perform well on conditional generation with soft, high-level constraints [51, 14, 22, 9, 4], PDE applications demand that outputs satisfy strict, low-level constraints specified by the operators $\mathcal{F}$ and $\mathcal{B}$.

However, enforcing PDE constraints within diffusion models is nontrivial. A core difficulty is that, at each noise level $t$, diffusion models operate on noisy variables $x_t$ rather than the clean physical field $x_0$, on which constraints such as $\mathcal{F}[x_0] = 0$ are defined. Once noise is added, $x_0$ becomes inaccessible from $x_t$, making it impractical to apply physical constraints, as shown in the left part of Fig. 1 (a). While one option is to reconstruct $x_0$ by running the full deterministic sampling trajectory, this approach is computationally expensive, as it requires many forward passes through the diffusion model [3]. A more common alternative is to approximate $x_0$ with the posterior mean $\mathbb{E}[x_0|x_t]$, which can be efficiently computed via Tweedie's formula [3, 23, 8, 27] (see the right part of Fig. 1 (a)). However, this introduces a theoretical inconsistency: enforcing constraints on the posterior mean, $\mathcal{F}[\mathbb{E}[x_0|x_t]]$, is not equivalent to enforcing the expected constraint, $\mathbb{E}[\mathcal{F}[x_0]|x_t]$, due to Jensen's inequality. This mismatch, known as Jensen's Gap [3], can lead to degraded physical fidelity. Overcoming this gap is crucial for reliable physics-constrained generation with diffusion models.

Contributions. We propose a simple yet effective framework that enforces PDE constraints in diffusion models via post-hoc distillation, enabling reliable and efficient generation under physical laws. As shown in Fig. 1 (c), our method sidesteps the limitations of existing constraint-guided diffusion-based approaches by decoupling physics enforcement from the diffusion trajectory. Our main contributions are:

- Empirical confirmation of Jensen's Gap: We provide the first explicit empirical demonstration and quantitative analysis of Jensen's Gap, a fundamental discrepancy that arises when PDE constraints are imposed on the posterior mean $\mathbb{E}[x_0|x_t]$ at intermediate noise levels rather than directly on the final clean sample $x_0$.
- Theoretically sound: Our method circumvents Jensen's Gap by enforcing PDE constraints directly on the final generated samples. Unlike guidance-based approaches that often trade off generative quality to improve constraint satisfaction, our post-hoc distillation preserves both physical accuracy and distributional fidelity.
- Versatile and efficient inference: The distilled student model retains the full generative capabilities of the original teacher model, including physical simulation, partial-observation reconstruction, and unified forward/inverse PDE solving, while supporting one-step generation for faster inference.
Comprehensive experiments on various physical equations demonstrate that PIDDM outperforms existing guidance-based methods [23, 8, 3, 27] in both generation quality and PDE satisfaction.

2 Related Work

2.1 Diffusion Model
Diffusion models [58, 21, 28] learn a score function, $\nabla\log p(x_t)$, to reverse a predefined diffusion process, typically of the form $x_t = x_0 + \sigma_t\varepsilon$. A key characteristic of diffusion models is that sampling requires iteratively reversing this process over a sequence of timesteps. This iterative nature presents a challenge for controlled generation: to guide the sampling trajectory effectively, we often need to first estimate the current denoised target $x_0$ in order to determine the correct guidance direction. In other words, to decide how to get there, we must first understand where we are. However, obtaining this information through full iterative sampling is computationally expensive and often impractical in an optimization regime. A practical workaround is to leverage the implicit one-step data estimate provided by diffusion models via Tweedie's formula [13], which requires only a single network forward pass:

$$\hat{x}_0 \approx \mathbb{E}[x_0|x_t] = x_t + \sigma_t^2\nabla\log p(x_t),$$

where $\hat{x}_0$ denotes the final denoised sample obtained from $x_t$ with a deterministic sampler; the gap between the two closes as $t \to 0$. Although the posterior mean $\mathbb{E}[x_0|x_t]$ is not theoretically equivalent to the final sample obtained after full denoising, in practice this estimate serves as a useful proxy for the underlying data and enables approximate guidance for controlled generation without completing the entire sampling trajectory.

2.2 Constrained Generation for PDE Systems

Diffusion models have demonstrated strong potential for physically constrained applications due to their generative nature. This generative capability naturally supports the basic task of simulating physical data and also extends to downstream applications such as reconstruction from partial observations and solving both forward and inverse problems. However, many scientific tasks require strict adherence to physical laws, often expressed as PDE constraints on the data. These constraints, applied at the sample level $x$, are not easily enforced within diffusion models, which are trained to model the data distribution $p(x)$. To address this, prior works have proposed three main strategies for incorporating physical constraints into diffusion models.

Training-time Loss Injection. PG-Diffusion [54] employs Classifier-Free Guidance (CFG), where a conditional diffusion model is trained using the PDE residual error as a conditioning input. However, CFG is well known to suffer from theoretical inconsistencies (the interpolated conditional score function does not match the true conditional score), which limits its suitability for enforcing precise physical constraints. To avoid this issue, PIDM [3] introduces a loss term based on the residual evaluated at the posterior mean, $\mathbb{E}[x_0|x_t]$. While this approach avoids the theoretical pitfalls of CFG, the constraint is still not imposed on the actual sample $x_0$, leading to what PIDM identifies as Jensen's Gap.

Sampling-time Guidance. Diffusion Posterior Sampling (DPS), used in DiffusionPDE [23] and CoCoGen [27], applies guidance at each sampling step using the gradient of the PDE residual evaluated on the posterior mean $\mathbb{E}[x_0|x_t]$. These methods therefore inherit the Jensen's Gap issue, as the guidance operates on an estimate of the final sample rather than the sample itself. Moreover, DPS assumes that the residual error follows a Gaussian distribution, a condition that may not hold in real-world PDE systems. Meanwhile, to support hard constraints, ECI-sampling [8] directly modifies the posterior mean $\mathbb{E}[x_0|x_t]$ using known boundary conditions.
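A minimal sketch of the one-step posterior-mean estimate via Tweedie's formula discussed above, under the $x_t = x_0 + \sigma_t\varepsilon$ convention. The `score_model` here is an illustrative stand-in (for a standard-normal data distribution the score is known in closed form), not an API from any of the cited works.

```python
import torch

def posterior_mean(score_model, x_t: torch.Tensor, sigma_t: float) -> torch.Tensor:
    """Tweedie's formula: E[x0 | x_t] = x_t + sigma_t^2 * score(x_t, sigma_t).
    One network forward pass instead of a full reverse trajectory."""
    with torch.no_grad():
        score = score_model(x_t, sigma_t)  # approx. grad_x log p(x_t)
    return x_t + sigma_t ** 2 * score

# Toy stand-in: if x0 ~ N(0, I), then x_t ~ N(0, (1 + sigma^2) I) and the
# exact score is -x / (1 + sigma^2), so the estimate below is exact.
score_model = lambda x, s: -x / (1.0 + s ** 2)
x_t = torch.randn(4, 2) * (1.0 + 0.8 ** 2) ** 0.5
x0_hat = posterior_mean(score_model, x_t, sigma_t=0.8)
```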
Figure 2: Empirical illustration of the Jensen's Gap in physics-constrained diffusion models. (a) Absolute velocity error and angular discrepancy ($1 - \cos\theta$) between Diffusion Posterior Sampling (DPS) and the ground-truth conditional ODE velocity on the MoG dataset. (b) and (c) Histograms comparing the first (unconstrained) and second (hard-constrained) dimensions of DPS-sampled MoG data against the ground-truth MoG. (d) Training-time manifestation: diffusion-loss comparison between vanilla training and PIDM on a Stokes Problem dataset.

Noise Prompt. Another stream of research, often called noise prompting or golden-noise optimization, directly tunes the initial noise so that the resulting sample satisfies a target constraint [4, 16, 67, 62, 44, 6]. In the physics domain, this idea is used to minimize the true PDE residual $\mathcal{R}(x)$ evaluated on the final sample, rather than the surrogate residual $\mathcal{R}(\mathbb{E}[x_0 \mid x_t])$. Because the constraint is imposed on the actual output, noise prompting sidesteps the Jensen's Gap altogether and therefore serves as a strong baseline in ECI-sampling [8] and PIDM [3]. The main drawback is efficiency: optimizing the noise requires backpropagating through the entire sampling trajectory, which is computationally expensive and prone to gradient instability.

2.3 Distillation of Diffusion Models

Sampling from diffusion models involves integrating a reverse diffusion process, which is computationally expensive. Even with the aid of high-order ODE solvers [45, 56, 40, 41, 66], parallel sampling [53], and better training schedules [28, 47, 37, 38, 39], the process remains iterative and typically requires hundreds of network forward passes. To alleviate this inefficiency, distillation-based methods have been developed to enable one-step generation by leveraging the deterministic nature of samplers (e.g., DDIM), under which the noise–data pairs become fixed. The most basic formulation, knowledge distillation [43], trains a student model to replicate the teacher's deterministic noise-to-data mapping. However, subsequent studies have shown that directly learning this raw mapping is challenging for neural networks: the high curvature of sampling trajectories often yields noise–data pairs that are distant in Euclidean space, making the regression task ill-conditioned and hard to generalize. To address this, recent research has proposed three complementary strategies. (1) Noise–data coupling refinement: Rectified Flow [38] distills the sampling process into a structure approximating optimal transport, where the learned mapping corresponds to minimal-cost trajectories between noise and data; InstaFlow [39] further demonstrates that such near-optimal-transport couplings significantly ease the learning problem for student models. (2) Distribution-level distillation: rather than matching individual noise–data pairs, DMD [63] trains the student via score-matching losses that align the overall data distributions, bypassing the need to regress complex mappings directly. (3) Trajectory distillation: instead of supervising only on the initial ($x_T$) and final ($x_0$) states, this approach provides supervision at intermediate states $x_t$ along the ODE trajectory [5, 65, 57]. This decomposition allows the student model to learn the generative process in a piecewise manner, which improves stability and sample fidelity.
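For reference, deterministic sampling of a velocity-parameterized (flow) teacher reduces to Euler integration of the reverse-time ODE, which is exactly what fixes the noise–data coupling exploited by all of these distillation strategies. A minimal sketch, with `v_net` and the step count as assumptions:

```python
import torch

@torch.no_grad()
def euler_sample(v_net, eps, n_steps=100):
    # Integrate x_{t-dt} = x_t - v(x_t, t) * dt from t=1 (noise) to t=0 (data).
    # Deterministic integration couples each initial noise eps to a fixed
    # output x0, yielding (eps, x0) pairs for distillation.
    x, dt = eps, 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = torch.full((eps.shape[0],), i * dt, device=eps.device)
        x = x - v_net(x, t) * dt
    return x
```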
3 Problem Setup: Jensen's Gap in Diffusion Models with PDE Constraints

In scientific machine learning, there exist many hard and low-level constraints that are mathematically strict and non-negotiable [33, 19, 46, 52] and that are hard to satisfy with standard diffusion models [18, 17]. In this section, we discuss how existing works impose these constraints on diffusion-generated data and the Jensen's Gap [15, 3, 23] this introduces.

3.1 Preliminaries on Physics Constraints

Physics constraints are typically expressed as a partial differential equation (PDE) $\mathcal{F}$ defined over a solution domain $\Omega \subset \mathbb{R}^d$, together with a boundary-condition operator $\mathcal{B}$ defined on the coefficient domain $\Omega'$:
$$\mathcal{F}[u(\xi)] = 0 \;\; \text{for } \xi \in \Omega, \qquad \mathcal{B}[a(\xi')] = 0 \;\; \text{for } \xi' \in \Omega'. \tag{1}$$
In practice, the domains $\Omega$ and $\Omega'$ are discretized into a uniform grid, typically of size $H \times W$, and the fields $u$ and $a$ are evaluated at the grid points to produce the observed data $x_0 = (u, a)$, on which diffusion models are trained to learn the joint distribution $p(x_0) = p((u, a))$. While PINNs [49] model the mapping $\xi, \xi' \mapsto (u(\xi), a(\xi'))$ with differentiable neural networks to enable automatic differentiation [48, 1], grid-based approaches commonly approximate the differential operators in $\mathcal{F}$ via finite-difference methods [55, 32]. To quantify the extent to which a generated sample $x_0$ violates the physical constraints, the physics residual error is often defined by:
$$\mathcal{R}(x_0) = \mathcal{R}((u, a)) := \big(\mathcal{F}[u], \mathcal{B}[a]\big)^\top. \tag{2}$$
Here, $\mathcal{R}(x)$ measures the discrepancy between the sample $x$ and the prescribed PDE $\mathcal{F}$ and boundary conditions $\mathcal{B}$. The physics residual loss is then defined as the squared norm of this residual error, i.e., $\|\mathcal{R}(x)\|^2$.
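As a concrete instance of a grid-based residual operator, the sketch below assembles the PDE part of Eq. (2) for a Poisson-type constraint $\nabla^2 u = a$ using a five-point finite-difference stencil; the stencil, grid spacing, and the omitted boundary handling are our illustrative assumptions, not the exact discretization used in this work.

```python
import torch
import torch.nn.functional as F

def poisson_residual(x, h=1.0 / 127):
    # Finite-difference residual R(x) for the constraint laplacian(u) = a,
    # with x = (u, a) stacked along the channel dimension, shape (B, 2, H, W).
    u, a = x[:, :1], x[:, 1:2]
    stencil = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]], device=x.device).view(1, 1, 3, 3)
    lap_u = F.conv2d(u, stencil) / h**2          # interior points only
    return lap_u - a[:, :, 1:-1, 1:-1]

# Physics residual loss: squared norm of the residual error.
# loss = poisson_residual(x).pow(2).mean()
```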
3.2 Imposing PDE Constraints in Diffusion Models

The physical constraints $\mathcal{R}$ are defined on the clean field $x_0$, while during training or sampling the diffusion model only observes the noisy state $x_t$. Direct optimization or controlled generation based on the physics residual loss $\mathcal{R}(x_0)$ is therefore intractable. A practical workaround is to evaluate the constraint on an estimate of $x_0$ from $x_t$, and a common choice is the estimated posterior mean $\mathbb{E}[x_0 \mid x_t]$ obtained from the score network of the diffusion model [3, 23]. As a simplified example, consider the forward process $x_t = x_0 + \sigma_t \epsilon$, where $\sigma_t$ denotes the noise level at time $t$ and $\epsilon \sim \mathcal{N}(0, I)$ is standard Gaussian noise. The posterior mean can then be efficiently estimated via Tweedie's formula:
$$\hat{x}_\theta(x_t, t) := x_t + \sigma_t^2 s_\theta(x_t, t) \approx x_t + \sigma_t^2 \nabla \log p(x_t) \approx \mathbb{E}[x_0 \mid x_t], \tag{3}$$
where $s_\theta$ is a learned score function approximating the gradient of the log-density (see Appendix A.2 for the derivations for the general diffusion process). Leveraging this approximation, several existing works incorporate PDE constraints by evaluating the PDE residual operator $\mathcal{R}$ on $\hat{x}_\theta(x_t, t)$. For instance, PIDM [3] integrates PDE constraints at training time by augmenting the standard diffusion objective with an additional PDE residual loss $\mathcal{R}(\hat{x}_\theta(x_t, t))$. Similarly, at inference time, DiffusionPDE [23] and CoCoGen [27] employ diffusion posterior sampling (DPS) [9], guiding each intermediate sample $x_t$ with the gradient $\nabla_x \mathcal{R}(\hat{x}_\theta(x_t, t))$. On the other hand, ECI-sampling [8] directly projects hard constraints onto the posterior mean at each DDIM step using a correction operator (a more detailed discussion of these implementations can be found in Appendix C.4).

While these pioneering methods have been demonstrated to be effective in enforcing PDE constraints within diffusion models, they suffer from a theoretical inconsistency: PDE constraints are enforced on the posterior-mean approximation $\mathbb{E}[x_0 \mid x_t]$, which is not equivalent to enforcing them on the true generated data $x_0$, due to Jensen's inequality:
$$\mathcal{R}(\mathbb{E}[x_0 \mid x_t]) \neq \mathbb{E}[\mathcal{R}(x_0) \mid x_t]. \tag{4}$$
This discrepancy is commonly referred to as the Jensen's Gap [3, 23, 15].
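Equation (4) can be made tangible with the hard-constrained MoG dimension used in Sec. 3.3: data supported on $\{-1, +1\}$ with residual $R(x) = x^2 - 1$, so that $R(x_0) = 0$ exactly on data while $R(\mathbb{E}[x_0 \mid x_t])$ is generally nonzero. The closed-form toy below is our illustration, not the paper's code.

```python
import torch

def jensen_gap_demo(x_t, sigma_t=1.0):
    # Two-point prior x0 in {-1, +1} diffused by x_t = x0 + sigma_t * eps.
    # Posterior weights of the two atoms follow from Bayes' rule.
    log_p = torch.stack([-(x_t - 1)**2, -(x_t + 1)**2]) / (2 * sigma_t**2)
    w_pos, w_neg = torch.softmax(log_p, dim=0)
    post_mean = w_pos - w_neg                    # E[x0 | x_t], lies in (-1, 1)
    lhs = post_mean**2 - 1                       # R(E[x0|x_t]): nonzero
    rhs = 0.0                                    # E[R(x0)|x_t] = 0 on data
    return lhs, rhs

# jensen_gap_demo(torch.tensor(0.3)) -> (~ -0.92, 0.0): the surrogate
# residual is strictly negative even though every true sample satisfies R = 0.
```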
To mitigate this issue, PIDM and DiffusionPDE heuristically down-weight PDE constraints at early denoising steps (large $t$), in training and sampling respectively, where the Jensen's Gap is pronounced, and emphasize them near $t \to 0$, where the posterior-mean approximation improves. ECI-sampling introduces stochastic resampling steps [61] to project theoretically inconsistent intermediate samples back toward their correct distribution. Although these methods provide partial practical improvements, they remain fundamentally limited by their ad hoc nature: neither resolves the underlying theoretical gap nor guarantees rigorous physical-constraint satisfaction in generated outputs.

3.3 Demonstration of the Jensen's Gap

To better illustrate the presence of the Jensen's Gap and its negative effects, we conduct experiments on two synthetic datasets: a Mixture-of-Gaussians (MoG) dataset and a Stokes Problem dataset.

Algorithm 1 PIDDM Training: Physics-Informed Distillation
Require: teacher model $v_\theta(x, t)$, student model $d_{\theta'}$, batch size $B$, steps $N_s$, step size $dt = 1/N_s$, physics residual error $\mathcal{R}$, loss weight $\lambda_{\text{train}}$, learning rate $\eta_{\text{train}}$
1: repeat
2:   Sample $\epsilon_{1:B} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I)$; $x_T \leftarrow \epsilon_{1:B}$
3:   for $t = T - dt, \ldots, 0$: $x_t \leftarrow x_{t+dt} - v_\theta(x_{t+dt}, t + dt)\,dt$ ▷ Sampling phase
4:   $x_{\text{pred}} \leftarrow d_{\theta'}(\epsilon_{1:B})$
5:   $\mathcal{L} \leftarrow \frac{1}{B}\big(\|x_{\text{pred}} - x_0\|^2 + \lambda_{\text{train}} \|\mathcal{R}(x_{\text{pred}})\|^2\big)$ ▷ Distillation phase
6:   $\theta' \leftarrow \theta' - \eta_{\text{train}} \nabla_{\theta'} \mathcal{L}$
7: until converged

Sampling-time Jensen's Gap. We demonstrate the sampling-time Jensen's Gap on the Mixture-of-Gaussians (MoG) dataset, where the score function is analytically tractable, allowing us to isolate the effect of the diffusion process without interference from training error. The MoG is constructed in 2D: the first dimension follows a bimodal Gaussian distribution, while the second dimension encodes a discrete latent variable that serves as a hard constraint. Concretely, the joint distribution is defined as a mixture of two Gaussians, each supported on a distinct horizontal line:
$$p(x_0) = 0.5\,\mathcal{N}(x_1; -1, \sigma^2)\,\delta(x_2 + 1) + 0.5\,\mathcal{N}(x_1; +1, \sigma^2)\,\delta(x_2 - 1), \tag{5}$$
where $\delta(\cdot)$ denotes the Dirac delta function and $\sigma = 0.2$. To examine the impact of the Jensen's Gap during sampling, we compare Diffusion Posterior Sampling (DPS) [9], which uses the latent code to guide generation, against the ground-truth conditional ODE trajectory derived analytically. We evaluate three representative diffusion processes, Variance-Preserving (VP) [21], sub-VP [58], and linear [37], and compare their velocity fields during inference to characterize the Jensen's Gap. We compute the mean absolute error (MAE) and the angular error between the DPS-predicted velocity field $v_{\text{DPS}}(x, t)$ and the ground-truth velocity $v_{\text{GT}}(x, t)$. Both errors are significantly elevated at intermediate timesteps and diminish only as $t \to 0$, as shown in Fig. 2a. Although DPS achieves accurate sampling in the unconstrained dimension (Fig. 2b), it fails to respect the hard constraint in the constrained dimension (Fig. 2c).

Training-time Jensen's Gap. To examine the Jensen's Gap during training, we use the synthetic Stokes Problem dataset as the target modeling distribution $p_{\text{Stokes}}$. The diffusion model $v_\theta$ adopts a Fourier Neural Operator (FNO) [34] architecture, and the diffusion process follows a standard linear noise schedule [36, 38, 39]. Further details of the dataset and training configuration are provided in Appendices B and C. We take PIDM [3] as an example, which explicitly adds the PDE loss $\mathcal{R}(\hat{x}_\theta(x_t, t))$ to the diffusion loss, and compare its performance with standard diffusion training. To evaluate generative performance, we track the diffusion loss, which theoretically serves as an evidence lower bound (ELBO) [21, 31, 12, 50, 13]. The comparison in Fig. 2d reveals a significant increase in diffusion loss when the PDE residual loss is incorporated, suggesting that the PDE residual loss does not help the model better shape a data distribution that satisfies the PDE constraints. This observation also corroborates findings from PIDM [3], which identified that residual supervision on the posterior mean can create "a conflicting objective between the data and residual loss", where the data loss is the original diffusion training objective. These results provide further evidence for the existence of the Jensen's Gap in training: enforcing constraints on $\mathbb{E}[x_0 \mid x_t]$ may interfere with maximizing the likelihood of the true data distribution.
4 Method: Physics-Informed Distillation of Diffusion Models

In Section 3 we demonstrated the existence of the Jensen's Gap when incorporating physical constraints into diffusion training and sampling, as done in prior works. To address this issue, we propose a distillation-based framework that bypasses the Jensen's Gap by construction. Specifically, instead of enforcing constraints on the posterior mean during the diffusion process, which introduces a trade-off with generative accuracy, we apply physical constraints directly to the final generated samples in a post-hoc distillation stage.

Algorithm 2 PIDDM Inference: Physics Data Simulation
1: Input: student model $d_{\theta'}$, physics residual error $\mathcal{R}$, refinement step number $N_f$, refinement step size $\eta_{\text{ref}}$, latent noise $\epsilon \sim \mathcal{N}(0, I)$.
2: for $i = 1, \ldots, N_f$: $\epsilon \leftarrow \epsilon - \eta_{\text{ref}} \nabla_\epsilon \|\mathcal{R}(d_{\theta'}(\epsilon))\|^2$ ▷ PDE refinement step (optional)
3: Output: $d_{\theta'}(\epsilon)$

Algorithm 3 PIDDM Inference for Forward/Inverse/Reconstruction
1: Input: student model $d_{\theta'}$, physics residual error $\mathcal{R}$, optimization iterations $N_o$, step size $\eta_{\text{infer}}$, observation $x'$, observation mask $M$, loss weight $\lambda_{\text{infer}}$, latent noise $\epsilon \sim \mathcal{N}(0, I)$.
2: for $i = 1, \ldots, N_o$ do
3:   $x_{\text{mix}} \leftarrow x' \odot M + d_{\theta'}(\epsilon) \odot (1 - M)$
4:   $\epsilon \leftarrow \epsilon - \eta_{\text{infer}} \nabla_\epsilon \big[\|(d_{\theta'}(\epsilon) - x') \odot M\|^2 + \lambda_{\text{infer}} \|\mathcal{R}(x_{\text{mix}})\|^2\big]$
5: end for
6: $x \leftarrow x' \odot M + d_{\theta'}(\epsilon) \odot (1 - M)$
7: Output: $x$

4.1 Diffusion Training

To decouple physical-constraint enforcement from the diffusion process itself, we first train a standard diffusion model with its original denoising objective, without adding any constraint-based loss. To obtain smoother sampling trajectories, which benefits the later noise–data distillation [38, 39], we adopt a linear diffusion process and apply the v-prediction parameterization [38, 36, 39, 8, 14], commonly referred to as a flow model. Specifically, the training objective is defined as:
$$\mathcal{L}(\theta) = \mathbb{E}_{t \sim \mathcal{U}(0,1),\, x_0 \sim p(x_0),\, \epsilon \sim \mathcal{N}(0, I)} \|v_\theta(x_t, t) - (\epsilon - x_0)\|^2, \qquad x_t = (1 - t)\,x_0 + t\,\epsilon, \tag{6}$$
where $p(x_0)$ is the distribution of the joint data containing both solution and coefficient fields $x = (u, a)$, $\epsilon$ is sampled from a standard Gaussian distribution, and $v_\theta$ is the neural network serving as the diffusion model. This formulation allows the model to learn to reverse the diffusion process without entangling it with physical supervision, thereby preserving generative fidelity.
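A minimal sketch of the objective in Eq. (6); the network interface `v_net(x_t, t)` is the only assumption.

```python
import torch

def flow_matching_loss(v_net, x0):
    # Linear-process v-prediction objective (Eq. 6):
    # x_t = (1 - t) x0 + t eps, regression target eps - x0.
    t = torch.rand(x0.shape[0], device=x0.device)
    eps = torch.randn_like(x0)
    t_b = t.view(-1, *([1] * (x0.dim() - 1)))    # broadcast t over field dims
    x_t = (1 - t_b) * x0 + t_b * eps
    return (v_net(x_t, t) - (eps - x0)).pow(2).mean()
```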
4.2 Imposing PDE Constraints in Distillation

After training the teacher diffusion model with the standard denoising objective, we proceed to the distillation stage, where we transfer its knowledge to a student model designed for efficient one-step generation. Crucially, this post-hoc distillation stage is where we impose PDE constraints, thereby avoiding the Jensen's Gap observed in prior works that apply constraints during diffusion training or sampling. The distillation process is guided by two complementary objectives: (1) learning to map a noise sample to the final generated output predicted by the teacher model, and (2) enforcing physical consistency on this output via PDE-residual minimization.

Concretely, we begin by sampling a noise input $\epsilon \sim \mathcal{N}(0, I)$ and generating a target sample $x_0$ with the pre-trained teacher model via deterministic integration of the reverse-time ODE:
$$x_{t - dt} = x_t - v_\theta(x_t, t)\,dt, \tag{7}$$
which proceeds from $t = 1$ to $t = 0$ with a fixed step size $dt$. This yields a paired noise–data dataset $\mathcal{D} = \{(\epsilon, x_0)\}$ for distillation, as shown in Fig. 1(b). A student model $d_{\theta'}(\epsilon)$ is then trained to predict $x_0$ in one step, as shown in Fig. 1(c). Meanwhile, to enforce physical consistency, we evaluate the physics residual error on the output $x = d_{\theta'}(\epsilon)$, i.e., $\|\mathcal{R}(x)\|^2$. The overall training objective is:
$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{sample}} + \lambda_{\text{train}} \mathcal{L}_{\text{PDE}} = \mathbb{E}_{(\epsilon, x_0) \sim \mathcal{D}}\big[\|d_{\theta'}(\epsilon) - x_0\|^2\big] + \lambda_{\text{train}} \|\mathcal{R}(x)\|^2, \tag{8}$$
where $\lambda_{\text{train}}$ is a tunable hyperparameter that balances generative fidelity against physical-constraint satisfaction. The optimization is repeated until convergence, as described in Algorithm 1.
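Putting Eqs. (7) and (8) together, a condensed PyTorch sketch of the loop in Algorithm 1, reusing the `euler_sample` sketch from Sec. 2.3; the hyperparameter values mirror Appendix C, but the code itself is a simplified illustration rather than our released implementation.

```python
import torch

def piddm_distill(v_teacher, d_student, residual_fn, shape,
                  n_iters=2000, batch=128, n_steps=100, lam=10.0, lr=3e-2):
    # Distill the teacher into a one-step student while imposing the PDE
    # residual on the student's actual output, not on a posterior mean.
    opt = torch.optim.Adam(d_student.parameters(), lr=lr)
    for _ in range(n_iters):
        eps = torch.randn(batch, *shape)
        x0 = euler_sample(v_teacher, eps, n_steps)   # teacher targets, Eq. (7)
        x_pred = d_student(eps)                      # one-step student sample
        loss = (x_pred - x0).pow(2).mean() \
               + lam * residual_fn(x_pred).pow(2).mean()   # Eq. (8)
        opt.zero_grad(); loss.backward(); opt.step()
```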
The most relevant prior work is vanilla knowledge distillation [43], which trains a student model to map noise to data using pairs sampled via deterministic samplers from a teacher diffusion model. However, this mapping is difficult to learn due to the high curvature of sampling trajectories, which produces noise–data pairs that are distant in Euclidean space [39]. To address this, we adopt a linear diffusion process [36, 38], which yields smoother trajectories and enhances learnability. Beyond this setup, we further evaluate more advanced distillation strategies, including Rectified Flow [39] and Distribution Matching Distillation [63], which improve the noise–data coupling and the distribution alignment, respectively (see Table 3 for details).

Table 1: Generative metrics on various PDE problems. The PDE error is the MSE of the evaluated physics residual error. The best results are in bold and the second best are underlined.

Dataset   Metric              PIDDM-1  PIDDM-ref  ECI    DiffusionPDE  D-Flow  PIDM   Vanilla
Darcy     MMSE (×10⁻²)        0.112    0.037      0.153  0.419         0.129   0.515  0.108
          SMSE (×10⁻²)        0.082    0.002      0.103  0.163         0.085   0.368  0.069
          PDE Error (×10⁻⁴)   0.226    0.148      1.582  1.071         0.532   1.236  1.585
          NFE (×10³)          0.001    0.080      0.500  0.100         5.000   0.100  0.100
Poisson   MMSE (×10⁻²)        0.162    0.113      0.183  0.861         0.172   0.948  0.150
          SMSE (×10⁻²)        0.326    0.274      0.291  0.483         0.475   0.701  0.353
          PDE Error (×10⁻⁹)   0.073    0.050      2.420  1.270         0.831   1.593  2.443
          NFE (×10³)          0.001    0.080      0.500  0.100         5.000   0.100  0.100
Burgers   MMSE (×10⁻²)        0.152    0.012      0.294  0.064         0.305   0.948  0.264
          SMSE (×10⁻²)        0.133    0.101      0.105  0.103         0.207   0.701  0.114
          PDE Error (×10⁻³)   0.466    0.174      1.572  1.032         0.730   1.593  1.334
          NFE (×10³)          0.001    0.080      0.500  0.100         5.000   0.100  0.100

4.3 Downstream Tasks

Our method naturally supports one-step generation of physically constrained data, jointly producing both coefficient and solution fields. Beyond this intrinsic functionality, it also retains the flexibility of the teacher diffusion model, enabling various downstream tasks such as forward and inverse problem solving and reconstruction from partial observations. Compared to the teacher model, our method achieves these capabilities with improved computational efficiency and stronger physical alignment.

Generative Modeling. In our setting, generative modeling aims to sample physically consistent pairs $x_0 = (u, a)$, representing solution and coefficient fields, from a learned distribution that satisfies the underlying PDE system. Our student model inherently supports this task through efficient one-step generation: given a latent variable $\epsilon \sim \mathcal{N}(0, I)$, the model directly outputs a sample $x_0 = d_{\theta'}(\epsilon)$ that approximates a valid solution–coefficient pair. Beyond this default mode, we introduce an optional refinement stage based on constraint-driven optimization (Algorithm 2), which further reduces the physics residual by updating $\epsilon$ via gradient descent. This strategy is inspired by noise-prompting techniques [4, 16], which optimize the final generated sample with respect to the initial noise. However, unlike those methods, which require backpropagation through the entire sampling trajectory and thus suffer from high computational cost and issues like gradient vanishing or explosion, our refinement operates efficiently in a one-step setting. While optional, this mechanism offers an additional degree of control, which is particularly valuable in scientific applications that demand strict physical consistency [33, 19, 46, 52].
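The refinement stage of Algorithm 2 amounts to a few gradient steps on the latent through a single student forward pass; a minimal sketch, with the step count and step size as illustrative defaults:

```python
import torch

def refine_latent(d_student, residual_fn, eps, n_steps=50, step=1e-2):
    # Optional PDE refinement (Algorithm 2): descend the residual of the
    # one-step sample w.r.t. the latent noise. Unlike noise prompting over
    # a full trajectory, each step backpropagates through one forward pass.
    eps = eps.detach().requires_grad_(True)
    for _ in range(n_steps):
        loss = residual_fn(d_student(eps)).pow(2).sum()
        (g,) = torch.autograd.grad(loss, eps)
        eps = (eps - step * g).detach().requires_grad_(True)
    return d_student(eps).detach()
```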
Forward/Inverse Problems and Reconstruction. PIDDM handles all downstream problems as conditional generation over the joint field $x = (u, a)$. Forward inference draws $u$ from a known $a$; inverse inference recovers $a$ from an observed $u$; reconstruction fills in the missing entries of $(u, a)$ given a partial observation $x'$. We solve all of these via optimization-based inference on the latent variable $\epsilon$, using the same student model $d_{\theta'}$ as in generation, as described in Algorithm 3. Let $x = d_{\theta'}(\epsilon)$ denote the generated sample, and let $M$ be a binary observation mask indicating the known entries of $x'$ with respect to $x$. To ensure hard consistency with the observed values (e.g., boundary conditions $\mathcal{B}$), we define a mixed sample by injecting the observed entries into the generated output, following ECI-sampling [8], and then update $\epsilon$ by descending the gradient of a combined objective:
$$\mathcal{L}_{\text{total}} = \|(x - x') \odot M\|^2 + \lambda \|\mathcal{R}(x_{\text{mix}})\|^2, \qquad x_{\text{mix}} = x' \odot M + x \odot (1 - M). \tag{9}$$
Interestingly, we find that applying this masking not only enforces hard constraints on $\mathcal{B}$ but also improves satisfaction of $\mathcal{F}$, as demonstrated in our ablation study in Table 3. Classical inverse solvers [34, 35, 42, 49] learn a deterministic map $u \mapsto a$ and therefore require full observations of $u$ to evaluate $\mathcal{F}[u, a] = 0$, a condition rarely met in practice. DiffusionPDE [23] relaxes this by sampling the missing variables, but enforces physics on the posterior mean, i.e., $\mathcal{F}\big[\mathbb{E}[x_0 \mid x_t]\big]$, and thus suffers from the Jensen's Gap. Our method avoids this inconsistency by imposing constraints directly on the final sample, $\mathcal{F}[x_0]$, yielding more reliable and physically consistent inverse solutions.
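A minimal sketch of Algorithm 3 and Eq. (9); the optimizer choice and the iteration counts are illustrative assumptions.

```python
import torch

def solve_with_observations(d_student, residual_fn, x_obs, mask,
                            n_iters=50, lr=1e-2, lam=10.0):
    # Optimization-based inference (Algorithm 3): fit the latent so that the
    # one-step sample matches the observed entries (mask = 1) while the
    # mixed sample satisfies the PDE residual of Eq. (9).
    eps = torch.randn_like(x_obs, requires_grad=True)
    opt = torch.optim.Adam([eps], lr=lr)
    for _ in range(n_iters):
        x = d_student(eps)
        x_mix = x_obs * mask + x * (1 - mask)    # hard-constrain known entries
        loss = ((x - x_obs) * mask).pow(2).sum() \
               + lam * residual_fn(x_mix).pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return x_obs * mask + d_student(eps) * (1 - mask)
```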
5 Experiments

Experiment Setup. We consider three widely used PDE benchmarks in the main text: Darcy flow, the Poisson equation, and Burgers' equation. Each dataset contains paired solution and coefficient fields defined on a $128 \times 128$ grid. All of these data are readily available from FNO [34] and DiffusionPDE [23]. We also provide results on other benchmarks in Appendix D. We consider ECI [8], DiffusionPDE [23], D-Flow [8, 4], PIDM [3], and vanilla diffusion models as baseline methods; their detailed implementations are given in Appendix C.4. Following ECI-sampling [8], we use FNO as both the teacher diffusion model and the student distillation model. The full specification of our experimental setup is given in Appendix C. To quantitatively evaluate performance, we report MMSE and SMSE following prior work [8, 29]: MMSE measures the mean squared error of the sample mean, and SMSE measures the error of the sample standard deviation, reflecting the quality of distribution modeling. The PDE error quantifies the violation of physical constraints using the physics residual error $\|\mathcal{R}(x)\|^2$. The number of function evaluations (NFE) reflects the computational cost of inference. For downstream tasks, we additionally report the MSE on the solution field, the coefficient field, or both, depending on the setting.

5.1 Empirical Evaluations

PIDDM samples the joint field $(u, a)$, enabling forward ($u \mid a$), inverse ($a \mid u$), and reconstruction (partial $u, a$) tasks (Sec. 4.3). DiffusionPDE [23] reports only reconstruction MSE, while ECI-sampling [8] and PIDM [3] cover at most one task, limited to either unconditional generation or forward solving. For a fair comparison, we evaluate all methods on all three tasks using both reconstruction error and PDE residual, providing a unified view of generative quality and physical fidelity.

Generative Tasks. We first evaluate the generative performance of our method across three representative PDE systems: the Darcy, Poisson, and Burgers' equations. As shown in Table 1, our one-step model (PIDDM-1) achieves competitive MMSE and SMSE scores while maintaining extremely low computational cost (1 NFE). Notably, PIDDM-1 already surpasses all prior methods that incorporate physical constraints during training or sampling, such as PIDM, DiffusionPDE, and ECI-sampling, which suffer from the Jensen's Gap and exhibit only marginal improvements over the vanilla diffusion baseline. Our optional refinement stage (PIDDM-ref) further reduces both the statistical errors and the physical PDE residuals, outperforming all baselines. Meanwhile, ECI, which only enforces hard constraints on boundary conditions, achieves moderate improvements but remains less effective on field-level physical consistency. Although D-Flow theoretically enforces physical constraints throughout the trajectory, it requires thousands of NFEs and often suffers from gradient instability.

Forward/Inverse Solving and Reconstruction. We further demonstrate the versatility of our method in forward and inverse problem solving on the Darcy dataset. Since the original PIDM [3] implementation addresses only unconditional generation, we pair it with Diffusion Posterior Sampling (DPS) [9] to extend it to the downstream tasks (forward, inverse, and reconstruction). Following the test protocol of D-Flow, we apply inference-time optimization over the initial noise to match the given observations while satisfying the physical laws. As shown in Table 2, our method (PIDDM) achieves the best results across all metrics, including MSE and PDE error, while being significantly more efficient than D-Flow, which requires 5000 function evaluations. Compared to ECI and DiffusionPDE, our method yields lower residuals and better predictive accuracy, reflecting its superior joint handling of physical and observational constraints.

5.2 Ablation Studies

To better understand the effect of key design choices in PIDDM, we perform ablations on five factors: teacher sampling steps $N_s$; distillation weight $\lambda_{\text{train}}$; inference weight $\lambda_{\text{infer}}$; diffusion schedule (VP, sub-VP, linear); and advanced distillation variants (Rectified Flow, DMD).

Table 2: Evaluation on various downstream tasks on the Darcy dataset. The PDE error is the MSE of the evaluated physics residual error. The best results are in bold.

Task         Metric              PIDDM  ECI    DiffusionPDE  D-Flow  PIDM
Forward      MSE (×10⁻¹)         0.316  0.776  0.691         0.539   0.380
             PDE Error (×10⁻⁴)   0.145  1.573  1.576         0.584   1.248
             NFE (×10³)          0.080  0.500  0.100         5.000   0.100
Inverse      MSE (×10⁻¹)         0.236  0.545  0.456         0.428   0.468
             PDE Error (×10⁻⁴)   0.126  1.505  1.402         0.438   1.113
             NFE (×10³)          0.080  0.500  0.100         5.000   0.100
Reconstruct  Coef MSE (×10⁻¹)    0.128  0.395  0.240         0.158   0.179
             Sol MSE (×10⁻¹)     0.102  0.219  0.143         0.125   0.147
             PDE Error (×10⁻⁴)   0.143  1.205  1.239         0.605   1.240
             NFE (×10³)          0.080  0.500  0.100         5.000   0.100
Figure 3: Ablation studies on the effect of several factors on the performance of PIDDM on the Darcy dataset. Panels (a), (b), and (c) show the effect of $N_s$, $\lambda_{\text{train}}$, and $\lambda_{\text{infer}}$, respectively.

Figure 3 presents three key ablation studies on the Darcy dataset. Panel (a) shows that increasing the teacher model's sampling steps $N_s$ consistently improves both generative quality and physical alignment, as reflected by the lower MMSE, SMSE, and PDE residuals of the distilled student, highlighting the importance of high-fidelity supervision. Panels (b) and (c) examine the impact of the PDE loss weight during distillation and inference, respectively. A moderate weight significantly reduces the PDE residual, but an overly large weight leads to a slight degradation in MMSE and SMSE. This trade-off arises from the numerical approximation errors introduced by finite-difference discretization. Overall, these results suggest that while stronger PDE guidance improves physical consistency, excessively large weights can harm statistical fidelity due to discretization-induced bias. They also expose potential limitations of our approach, such as the reliance on a pre-trained teacher model and the sensitivity to the choice of PDE loss weight.

We also explore whether more sophisticated distillation strategies can improve the quality of the student model. As shown in Table 3, advanced techniques such as Rectified Flow (RF-1, RF-2) and Distribution Matching Distillation (DMD) yield better MMSE and SMSE than the vanilla baseline while maintaining competitive PDE residuals. This suggests that better coupling of noise and data trajectories in distillation can reduce the trade-off between sample quality and constraint alignment. Notably, RF-2 achieves the best overall performance across the forward and inverse tasks. Besides, we analyze the effect of imposing hard constraints during downstream inference. Following the strategy inspired by ECI-sampling, we directly replace the masked entries of the generated sample with the observed values before computing the PDE residual, which ensures that the known information is preserved when evaluating physical consistency. As shown in Table 3, applying this hard constraint significantly improves the PDE residual across all tasks. We also validate our design choice of a linear diffusion process: as Table 3 shows, it outperforms the VP and sub-VP alternatives.

Table 3: Generative metrics on various downstream tasks on the Darcy dataset using different distillation strategies. All columns are PIDDM variants. PIDDM: our method; RF-1: reflowing [39] once; RF-2: reflowing [39] twice; DMD: [63]; w/o HC: the ablation that drops the hard constraints of Sec. 4.3 during PIDDM inference; VP and sub-VP: ablations on the VP and sub-VP diffusion processes. The PDE error is the MSE of the evaluated physics residual error. The best results are in bold.

Task         Metric              PIDDM  RF-1   RF-2   DMD    w/o HC  VP     sub-VP
Forward      MSE (×10⁻¹)         0.316  0.278  0.127  0.255  0.705   0.398  0.372
             PDE Error (×10⁻⁴)   0.145  0.129  0.098  0.134  0.354   0.154  0.157
Inverse      MSE (×10⁻¹)         0.236  0.195  0.136  0.188  0.503   0.284  0.271
             PDE Error (×10⁻⁴)   0.115  0.126  0.079  0.121  0.321   0.143  0.139
Reconstruct  Coef MSE (×10⁻¹)    0.128  0.107  0.913  0.954  0.294   0.133  0.138
             Sol MSE (×10⁻¹)     0.102  0.084  0.063  0.073  0.239   0.127  0.119
             PDE Error (×10⁻⁴)   0.143  0.118  0.085  0.104  0.464   0.159  0.158
6 Conclusion and Future Work

Method. We introduce PIDDM, a lightweight yet effective post-hoc distillation framework that equips diffusion models for physics-constrained generation. Concretely, we first train a standard diffusion model and then distill it into a student model while directly enforcing PDE constraints on the final output. In contrast to existing methods that impose constraints on the posterior mean $\mathbb{E}[x_0 \mid x_t]$, a mismatch known as the Jensen's Gap that forces a trade-off between generative quality and constraint satisfaction, PIDDM applies constraints to the actual sample $x_0$, ensuring physical consistency without sacrificing distributional fidelity.

Empirical Findings. We provide the first empirical illustrations of the Jensen's Gap in both diffusion training and sampling, demonstrating its impact on constraint satisfaction. Our experiments show that PIDDM improves physical fidelity in downstream tasks such as forward, inverse, and partial-reconstruction problems. Moreover, the student model enables one-step physics simulation, achieving substantial improvements in efficiency while maintaining high accuracy.

Limitations and Future Work. Our approach assumes access to a well-trained teacher model and a reliable PDE residual operator, which can be challenging to construct, especially when using coarse or low-accuracy finite-difference schemes. Additionally, although the one-step student model enables fast inference, its performance may degrade if the teacher model is poorly calibrated or fails to capture sufficient trajectory diversity. To address these limitations, we are interested in incorporating physical information into other aspects of modern generative models, such as designing physics-informed tokenizers [31, 20, 64, 7], i.e., encoders that map the data into an embedding space in which PDE constraints are easier to impose. We will also explore the theory of generative modeling under strict PDE constraints. Given prior work demonstrating the ineffectiveness of diffusion models for rule-based generation [18], it would be interesting to further prove whether post-hoc distillation can address or improve generation performance under strict constraints.

References

[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
[2] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.
[3] Jan-Hendrik Bastek, WaiChing Sun, and Dennis Kochmann. Physics-informed diffusion models. In The Thirteenth International Conference on Learning Representations, 2025.
[4] Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, and Yaron Lipman. D-flow: Differentiating through flows for controlled generation. arXiv preprint arXiv:2402.14017, 2024.
[5] David Berthelot, Arnaud Autef, Jierui Lin, Dian Ang Yap, Shuangfei Zhai, Siyuan Hu, Daniel Zheng, Walter Talbott, and
Eric Gu. Tract: Denoising diffusion models with transitive closure time-distillation. arXiv preprint arXiv:2303.04248, 2023.
[6] Changgu Chen, Libing Yang, Xiaoyan Yang, Lianggangxu Chen, Gaoqi He, Changbo Wang, and Yang Li. Find: Fine-tuning initial noise distribution with policy optimization for diffusion models. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 6735–6744, 2024.
[7] Hao Chen, Yujin Han, Fangyi Chen, Xiang Li, Yidong Wang, Jindong Wang, Ze Wang, Zicheng Liu, Difan Zou, and Bhiksha Raj. Masked autoencoders are effective tokenizers for diffusion models. arXiv preprint arXiv:2502.03444, 2025.
[8] Chaoran Cheng, Boran Han, Danielle C. Maddix, Abdul Fatir Ansari, Andrew Stuart, Michael W. Mahoney, and Bernie Wang. Gradient-free generation for hard-constrained systems. In The Thirteenth International Conference on Learning Representations, 2025.
[9] Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. arXiv preprint arXiv:2209.14687, 2022.
[10] John Crank. The Mathematics of Diffusion. Oxford University Press, 2 edition, 1975.
[11] Peter A. Davidson. Turbulence: An Introduction for Scientists and Engineers. Oxford University Press, 2015.
[12] Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural Computation, 7(5):889–904, 1995.
[13] Bradley Efron. Tweedie's formula and selection bias. Journal of the American Statistical Association, 106(496):1602–1614, 2011.
[14] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024.
[15] Xiang Gao, Meera Sitharam, and Adrian E Roitberg. Bounds on the jensen gap, and implications for mean-concentrated distributions. arXiv preprint arXiv:1712.05267, 2017.
[16] Xiefan Guo, Jinlin Liu, Miaomiao Cui, Jiankai Li, Hongyu Yang, and Di Huang. Initno: Boosting text-to-image diffusion models via initial noise optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9380–9389, 2024.
[17] Andi Han, Wei Huang, Yuan Cao, and Difan Zou. On the feature learning in diffusion models. arXiv preprint arXiv:2412.01021, 2024.
[18] Yujin Han, Andi Han, Wei Huang, Chaochao Lu, and Difan Zou. Can diffusion models learn hidden inter-feature rules behind images? arXiv preprint arXiv:2502.04725, 2025.
[19] Derek Hansen, Danielle C Maddix, Shima Alizadeh, Gaurav Gupta, and Michael W Mahoney. Learning physical models that can respect conservation laws. In International Conference on Machine Learning, pages 12469–12510. PMLR, 2023.
[20] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[21] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
[22] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance, 2022.
[23] Jiahe Huang, Guandao Yang, Zichen Wang, and Jeong Joon Park. DiffusionPDE: Generative PDE-solving under partial observation. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[24] Thomas JR Hughes.
The finite element method: linear static and dynamic finite element analysis. Courier Corporation, 2003.
[25] Frank P. Incropera, David P. DeWitt, Theodore L.
Bergman, and Adrienne S. Lavine. Fundamentals of Heat and Mass Transfer. John Wiley & Sons, 7 edition, 2011.
[26] John David Jackson. Classical Electrodynamics. John Wiley & Sons, 3 edition, 1998.
[27] Christian Jacobsen, Yilin Zhuang, and Karthik Duraisamy. Cocogen: Physically-consistent and conditioned score-based generative models for forward and inverse problems, 2024.
[28] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems, 35:26565–26577, 2022.
[29] Gavin Kerrigan, Giosue Migliorini, and Padhraic Smyth. Functional flow matching. arXiv preprint arXiv:2305.17209, 2023.
[30] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[31] Diederik P Kingma, Max Welling, et al. Auto-encoding variational bayes, 2013.
[32] Randall J LeVeque. Finite difference methods for ordinary and partial differential equations: steady-state and time-dependent problems. SIAM, 2007.
[33] Randall J LeVeque and Randall J Leveque. Numerical methods for conservation laws, volume 132. Springer, 1992.
[34] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
[35] Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. ACM/JMS Journal of Data Science, 1(3):1–27, 2024.
[36] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2023.
[37] Xingchao Liu, Chengyue Gong, and qiang liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In The Eleventh International Conference on Learning Representations, 2023.
[38] Xingchao Liu, Chengyue Gong, and qiang liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In The Eleventh International Conference on Learning Representations, 2023.
[39] Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, et al. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation. In The Twelfth International Conference on Learning Representations, 2023.
[40] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 35:5775–5787, 2022.
[41] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2022.
[42] Lu Lu, Pengzhan Jin, and George Em Karniadakis. Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
[43] Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388, 2021.
[44] Jiafeng Mao, Xueting Wang, and Kiyoharu Aizawa. The lottery ticket hypothesis in denoising: Towards semantic-driven initialization.
In European Conference on Computer Vision, pages 93–109. Springer, 2024.
[45] Dimitra Maoutsa, Sebastian Reich, and Manfred Opper. Interacting particle solutions of fokker–planck
equations through gradient–log–density estimation. Entropy, 22(8):802, 2020.
[46] S Chandra Mouli, Danielle C Maddix, Shima Alizadeh, Gaurav Gupta, Andrew Stuart, Michael W Mahoney, and Yuyang Wang. Using uncertainty quantification to characterize and improve out-of-domain learning for pdes. arXiv preprint arXiv:2403.10642, 2024.
[47] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162–8171. PMLR, 2021.
[48] A Paszke. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703, 2019.
[49] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
[50] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278–1286. PMLR, 2014.
[51] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
[52] Nadim Saad, Gaurav Gupta, Shima Alizadeh, and Danielle C Maddix. Guiding continuous operator learning through physics-based boundary constraints. arXiv preprint arXiv:2212.07477, 2022.
[53] Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, and Nima Anari. Parallel sampling of diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
[54] Dule Shu, Zijie Li, and Amir Barati Farimani. A physics-informed diffusion model for high-fidelity flow field reconstruction. Journal of Computational Physics, 478:111972, 2023.
[55] Gordon D Smith. Numerical solution of partial differential equations: finite difference methods. Oxford University Press, 1985.
[56] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.
[57] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models, 2023.
[58] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020.
[59] Stephen P. Timoshenko and James N. Goodier. Theory of Elasticity. McGraw-Hill, 3 edition, 1970.
[60] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[61] Hao Wang, Weihua Chen, Chenming Li, Wenjian Huang, Jingkai Zhou, Fan Wang, and Jianguo Zhang. Noise re-sampling for high fidelity image generation, 2025.
[62] Ruoyu Wang, Huayang Huang, Ye Zhu, Olga Russakovsky, and Yu Wu. The silent prompt: Initial noise as implicit guidance for goal-driven image generation. arXiv preprint arXiv:2412.05101, 2024.
[63] Tianwei Yin, Michaël Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6613–6623, 2024.
[64] Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, and Saining Xie. Representation alignment for generation: Training diffusion transformers is easier than you think. arXiv preprint arXiv:2410.06940, 2024.
[65] Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, and Anima Anandkumar. Fast sampling of diffusion models via operator learning. In International Conference on Machine Learning, pages 42390–42402. PMLR, 2023.
[66] Zhenyu Zhou, Defang Chen, Can Wang, and Chun Chen. Fast ode-based sampling for diffusion models in around 5 steps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7777–7786, 2024.
[67] Zikai Zhou, Shitong Shao, Lichen Bai, Zhiqiang Xu, Bo Han, and Zeke Xie. Golden noise for diffusion models: A learning framework. arXiv preprint arXiv:2411.09502, 2024.

A Mixture-of-Gaussians (MoG) Dataset

To study the sampling-time behavior of constrained diffusion models, we design a synthetic 2D Mixture-of-Gaussians (MoG) dataset with analytical score functions. Each sample $x = (x_1, x_2) \in \mathbb{R}^2$ consists of a data dimension $x_1$ and a fixed latent code $x_2$ that serves as a hard constraint. Specifically, we define a mixture model where $x_1$ is sampled from a Gaussian mixture conditioned on the latent code $z \in \{-1, +1\}$, and $x_2$ is deterministically set to $z$. The full distribution is:
$$x_2 = z \in \{-1, +1\}, \qquad x_1 \sim \mathcal{N}(\mu_z, \sigma^2), \tag{10}$$
with $\mu_{-1} = -1$, $\mu_{+1} = +1$, and fixed variance $\sigma^2 = 0.1$. The full 2D data point is thus given by:
$$x = (x_1, x_2)^\top, \quad \text{with } x_1 \sim \mathcal{N}(\mu_{x_2}, \sigma^2), \; x_2 \in \{-1, +1\}. \tag{11}$$
The resulting joint density $p(x)$ is a mixture of two Gaussians supported on parallel horizontal lines:
$$p(x) = \tfrac{1}{2}\mathcal{N}(x_1; -1, \sigma^2)\,\delta(x_2 + 1) + \tfrac{1}{2}\mathcal{N}(x_1; +1, \sigma^2)\,\delta(x_2 - 1), \tag{12}$$
where $\delta(\cdot)$ denotes the Dirac delta function. In the DPS comparison of Sec. 3.3, we set the DPS guidance weight to 0.035, which gives satisfactory performance.

A.1 Derivation of the Score Function of the MoG Dataset

Any MoG admits an analytical solution of the diffusion objectives. Specifically, consider a MoG of the form
$$x_0 \sim \frac{1}{K} \sum_{k=1}^{K} \mathcal{N}(\mu_k, \sigma_k^2 I),$$
where $K$ is the number of Gaussian components and $\mu_k$, $\sigma_k^2$ are the means and variances of the components. Suppose the solution of the diffusion process follows $x_t = \alpha_t x_0 + \sigma_t \xi$, where $\xi \sim \mathcal{N}(0, I)$. Since $x_0$ and $\xi$ are both sampled from Gaussian distributions, their linear combination $x_t$ also follows a mixture of Gaussians, i.e.,
$$x_t \sim \frac{1}{K} \sum_{k=1}^{K} \mathcal{N}\big(\alpha_t \mu_k, (\sigma_k^2 \alpha_t^2 + \sigma_t^2)\, I\big).$$
Writing $p_i(x_t)$ for the density of the $i$-th diffused component, we have
$$\nabla p_t(x_t) = \frac{1}{K} \sum_{i=1}^{K} p_i(x_t) \cdot \frac{-(x_t - \alpha_t \mu_i)}{\sigma_i^2 \alpha_t^2 + \sigma_t^2},$$
so the score of $x_t$ is
$$\nabla \log p_t(x_t) = \frac{\nabla p_t(x_t)}{p_t(x_t)} = \frac{\sum_{i=1}^{K} p_i(x_t) \cdot \frac{-(x_t - \alpha_t \mu_i)}{\sigma_i^2 \alpha_t^2 + \sigma_t^2}}{\sum_{i=1}^{K} p_i(x_t)}.$$
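The closed-form score above translates directly into code; a sketch assuming equal component weights and an isotropic variance shared across components (the delta-supported dimension corresponds to the $\sigma_k \to 0$ limit):

```python
import torch

def mog_score(x_t, mus, alpha_t, sigma_t, sigma_k=0.0):
    # Analytic score of an equal-weight MoG diffused to time t:
    # x_t ~ (1/K) sum_k N(alpha_t * mu_k, (sigma_k^2 alpha_t^2 + sigma_t^2) I).
    var = sigma_k**2 * alpha_t**2 + sigma_t**2
    diff = x_t.unsqueeze(1) - alpha_t * mus.unsqueeze(0)   # (B, K, d)
    log_w = -(diff**2).sum(-1) / (2 * var)   # log-densities up to a constant
    w = torch.softmax(log_w, dim=1)          # posterior component weights
    return (w.unsqueeze(-1) * (-diff / var)).sum(dim=1)    # (B, d) score
```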
A.2 Derivation of the Velocity Field of the Reverse ODE and DPS

Diffusion models define a forward diffusion process that perturbs the data distribution $p_{\text{data}}$ into a Gaussian. Formally, the diffusion process is an Itô SDE
$$dx_t = f(x_t)\,dt + g(t)\,dw,$$
where $dw$ is Brownian motion and $t$ flows forward from $0$ to $T$. The solution of this process gives a transition distribution $p_t(x_t \mid x_0) = \mathcal{N}(x_t \mid \alpha_t x_0, \sigma_t^2 I)$, where $\alpha_t = e^{\int_0^t f(s)\,ds}$ and $\sigma_t^2 = 1 - e^{-\int_0^t g(s)^2\,ds}$. In the linear diffusion process in particular, $\alpha_t = 1 - t$ and $\sigma_t = t$. To sample from the diffusion model, a typical approach is to apply a reverse-time SDE that reverses the diffusion process [2]:
$$dx_t = \big[f(x_t) - g(t)^2 \nabla_{x_t} \log p_t(x_t)\big]\,dt + g(t)\,d\bar{w},$$
where $d\bar{w}$ is Brownian motion and $t$ flows from $T$ to $0$. For every reverse-time SDE there exists a corresponding deterministic process sharing the same density evolution $\{p_t(x_t)\}_{t=0}^{T}$ [58]. Specifically, this deterministic process follows the ODE
$$dx_t = \big[f(x_t) - \tfrac{1}{2} g(t)^2 \nabla_{x_t} \log p_t(x_t)\big]\,dt,$$
where $t$ flows backward from $T$ to $0$. This defines the velocity field
$$v(x_t, t) = f(x_t) - \tfrac{1}{2} g(t)^2 \nabla_{x_t} \log p_t(x_t).$$
The posterior mean can be estimated from the score by
$$\mathbb{E}[x_0 \mid x_t] = \frac{x_t + \sigma_t^2 \nabla \log p_t(x_t)}{\alpha_t},$$
and from the velocity field by
$$\mathbb{E}[x_0 \mid x_t] = \frac{\dot{\sigma}_t x_t - \sigma_t v(x_t, t)}{\alpha_t \dot{\sigma}_t - \sigma_t \dot{\alpha}_t}.$$

B Datasets

B.1 Darcy Flow

We adopt the Darcy Flow setup introduced in DiffusionPDE [23]; the dataset is released with FNO [34]. For completeness, we describe the generation process here. Specifically, we consider the steady-state Darcy flow equation on a 2D rectangular domain $\Omega \subset \mathbb{R}^2$ with no-slip boundary conditions:
$$-\nabla \cdot \big(a(c) \nabla u(c)\big) = q(c), \;\; c \in \Omega, \qquad u(c) = 0, \;\; c \in \partial\Omega.$$
Here, $a(c)$ is the spatially varying permeability field with binary values, and $q(c)$ is set to 1 as a constant forcing. The pair $(u, a)$ is jointly modeled by the diffusion model.

B.2 Inhomogeneous Helmholtz Equation and Poisson Equation

We adopt the setup introduced in DiffusionPDE [23]; the dataset is released with FNO [34]. For completeness, we describe the generation process here. As a special case of the inhomogeneous Helmholtz equation, the Poisson equation is obtained by setting $k = 0$:
$$\nabla^2 u(c) = a(c), \;\; c \in \Omega, \qquad u(c) = 0, \;\; c \in \partial\Omega.$$
Here, $a(c)$ is a piecewise-constant forcing function. The pair $(u, a)$ is jointly modeled by the diffusion model.

B.3 Burgers' Equation

We adopt the setup introduced in DiffusionPDE [23]; the dataset is released with FNO [34]. For completeness, we describe the generation process here. We study the 1D viscous Burgers' equation with periodic boundary conditions on a spatial domain $\Omega = (0, 1)$ and temporal domain $\tau \in (0, T]$:
$$\partial_\tau u(c, \tau) + \partial_c \Big(\frac{u^2(c, \tau)}{2}\Big) = \nu\, \partial^2_{cc} u(c, \tau), \qquad u(c, 0) = a(c), \;\; c \in \Omega.$$
In our experiments, we set $\nu = 0.01$ and use 128 temporal steps, so each trajectory has shape $128 \times 128$. The pair $(u, a)$ is jointly modeled by the diffusion model.

B.4 Stokes Problem

We adopt the Stokes problem setup introduced in ECI-Sampling [8] and use their released generation code. For completeness, we describe the generation process below. The 1D Stokes problem is governed by the heat equation:
$$u_t = \nu u_{xx}, \qquad x \in [0, 1], \; t \in [0, 1],$$
with the following boundary and initial conditions:
$$u(x, 0) = A e^{-kx} \cos(kx), \;\; x \in [0, 1], \qquad a(t) := u(0, t) = A \cos(\omega t), \;\; t \in [0, 1],$$
where $\nu \geq 0$ is the viscosity, $A > 0$ is the amplitude, $\omega$ is the oscillation frequency, and $k = \sqrt{\omega/(2\nu)}$ controls the spatial decay. The analytical solution is given by:
$$u_{\text{exact}}(x, t) = A e^{-kx} \cos(kx - \omega t).$$
In our experiments, we set $A = 2$, $k = 5$, and take $a := \omega \sim \mathcal{U}[2, 8]$ as the coefficient field to jointly model with $u$.
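Since the Stokes problem has a closed-form solution, generated samples can be checked against it directly; a small sketch (the $\omega$ and $\nu$ values here are illustrative):

```python
import torch

def stokes_exact(x, t, A=2.0, omega=5.0, nu=0.1):
    # Closed-form solution u(x, t) = A exp(-k x) cos(k x - omega t),
    # with spatial decay k = sqrt(omega / (2 nu)).
    k = (omega / (2 * nu)) ** 0.5
    return A * torch.exp(-k * x) * torch.cos(k * x - omega * t)

# Example check on a 128 x 128 grid:
# xs, ts = torch.meshgrid(torch.linspace(0, 1, 128),
#                         torch.linspace(0, 1, 128), indexing="ij")
# mse = (u_sample - stokes_exact(xs, ts)).pow(2).mean()
```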
B.5 Heat Equation

We adopt the heat equation setup introduced in ECI-Sampling [8] and use their released generation code. For completeness, we describe the generation process below. The 1D heat (diffusion) equation with periodic boundary conditions is defined as:
$$u_t = \alpha u_{xx}, \qquad x \in [0, 2\pi], \; t \in [0, 1],$$
with the initial and boundary conditions:
$$a(x) := u(x, 0) = \sin(x + \varphi), \qquad a(t) := u(0, t) = u(2\pi, t).$$
Here, $\alpha$ denotes the diffusion coefficient and $\varphi$ controls the phase of the sinusoidal initial condition. The exact solution is:
$$u_{\text{exact}}(x, t) = e^{-\alpha t} \sin(x + \varphi).$$
In our experiments, we set $\alpha = 3$ and take $a := \varphi \sim \mathcal{U}[0, \pi]$ as the coefficient to jointly model with $u$.

B.6 Navier–Stokes Equation

We adopt the 2D Navier–Stokes (NS) setup from ECI-Sampling [8] and use their released generation code. The NS equation in vorticity form for an incompressible fluid with periodic boundary conditions is given by:
$$\partial_t w(x, t) + u(x, t) \cdot \nabla w(x, t) = \nu \Delta w(x, t) + f(x), \qquad x \in [0, 1]^2, \; t \in [0, T],$$
$$\nabla \cdot u(x, t) = 0, \qquad x \in [0, 1]^2, \; t \in [0, T],$$
$$w(x, 0) = w_0(x), \qquad x \in [0, 1]^2.$$
Here, $u$ denotes the velocity field and $w = \nabla \times u$ is the vorticity. The initial vorticity $w_0$ is sampled from $\mathcal{N}\big(0, 7^{3/2}(-\Delta + 49 I)^{-5/2}\big)$, and the forcing term is defined as $f(x) = 0.1\sqrt{2}\,\sin\!\big(2\pi(x_1 + x_2) + \phi\big)$, where $\phi \sim \mathcal{U}[0, \pi/2]$. We take $a := w_0$ as the coefficient to jointly model with $u$.

B.7 Porous Medium Equation

We use the Porous Medium Equation (PME) setup provided by ECI-Sampling [8], with a zero initial condition and a time-varying Dirichlet left boundary condition:
$$u_t = \nabla \cdot (u^m \nabla u), \qquad x \in [0, 1], \; t \in [0, 1],$$
$$u(x, 0) = 0, \;\; x \in [0, 1], \qquad u(0, t) = (mt)^{1/m}, \;\; t \in [0, 1], \qquad u(1, t) = 0, \;\; t \in [0, 1].$$
The exact solution is $u_{\text{exact}}(x, t) = \big(m \cdot \mathrm{ReLU}(t - x)\big)^{1/m}$. The exponent $m$ is sampled from $\mathcal{U}[1, 5]$. We take $a := m$ as the coefficient to jointly model with $u$.

B.8 Stefan Problem

We also adopt the Stefan problem configuration from ECI-Sampling [8], which is a nonlinear case of the Generalized Porous Medium Equation (GPME) with fixed Dirichlet boundary conditions:
$$u_t = \nabla \cdot \big(k(u) \nabla u\big), \qquad x \in [0, 1], \; t \in [0, T],$$
$$a(x, 0) := u(x, 0) = 0, \qquad a(0, t) := u(0, t) = 1, \qquad a(1, t) := u(1, t) = 0,$$
where $k(u)$ is a step function defined by a shock value $u^*$:
$$k(u) = \begin{cases} 1, & u \geq u^*, \\ 0, & u < u^*. \end{cases}$$
The exact solution is:
$$u_{\text{exact}}(x, t) = \mathbb{1}[u \geq u^*] \left(1 - \frac{(1 - u^*)\,\mathrm{erf}\big(x/(2\sqrt{t})\big)}{\mathrm{erf}(\alpha)}\right),$$
where $\alpha$ satisfies the nonlinear equation $(1 - u^*)/\sqrt{\pi} = u^*\,\mathrm{erf}(\alpha)\,\alpha \exp(\alpha^2)$. We follow ECI-Sampling in taking $a := u^* \sim \mathcal{U}[0.55, 0.7]$ as the coefficient to jointly model with $u$.

C Experimental Setup

This section provides details on the model architecture, the training configurations for diffusion and distillation, the evaluation protocols, and the baseline methods.

C.1 Model Structure

We follow ECI-sampling [8] and adopt the Fourier Neural Operator (FNO) [34] as both the teacher diffusion model and the student distillation model. A sinusoidal positional encoding [60] is appended as an additional input dimension. Specifically, we use a four-layer FNO with a frequency cutoff of $32 \times 32$, a time-embedding dimension of 32, a hidden channel width of 64, and a projection dimension of 256.

C.2 Diffusion and Distillation Training Setup

For diffusion training, we employ a standard linear noise schedule [38, 37, 36, 39] with a batch size of 128 and a total of 10,000 iterations. The model is optimized using Adam [30] with a learning rate of $3 \times 10^{-2}$. During distillation, we use Euler's method with 100 uniformly spaced timesteps from $t = 1$ to $t = 0$ for sampling. Every 100 epochs, we resample 1024 new noise–data pairs for supervision. Distillation is trained for 2000 epochs using Adam (learning rate $3 \times 10^{-2}$), with early stopping based on the squared norm of the observation loss, i.e., $\|d_{\theta'}(\epsilon) - x\|^2$.
The physics constraint weight $\lambda_{\text{train}}$ is set to 10 for Darcy Flow, Burgers' Equation, Stokes Flow, the Heat Equation, Navier–Stokes, the Porous Medium Equation, and the Stefan Problem. For the Helmholtz and Poisson equations, we increase $\lambda_{\text{train}}$ to $10^6$ due to the stiffness of these PDEs. All experiments are conducted on an NVIDIA RTX 4090 GPU.

C.3 Evaluation Setup

For physics-based data simulation, we evaluate models with and without physics refinement: the number of gradient-based refinement steps $N$ is set to 0 or 50. The step size $\eta$ is aligned with the dataset-specific $\lambda_{\text{train}}$ used during distillation. In forward and inverse problems, the observation mask $M$ defines the known entries. For forward problems, the mask has ones at the boundary entries. For partial reconstruction, the mask is sampled randomly with 20% of the entries set to 1 (observed) and the rest to 0 (missing). All evaluations are conducted on an NVIDIA RTX 4090 GPU.

C.4 Baseline Methods

We describe the configurations of all baseline methods used for comparison. Where necessary, we adapt our diffusion training and sampling codebase to implement their respective constraint mechanisms.

ECI-sampling. We follow the approach of directly substituting hard constraints into the posterior mean $\mathbb{E}[x_0 \mid x_t]$ based on a predefined observation mask. Specifically, we project these constraints at each DDIM step [56] using a correction operator $\mathcal{C}$:
$$x_{t - dt} = \mathcal{C}\big(\hat{x}_\theta(x_t, t)\big) \cdot (1 - t + dt) + \big(x_t - \hat{x}_\theta(x_t, t)\big) \cdot (t - dt), \tag{13}$$
where $t$ flows backward from 1 to 0, and $\hat{x}_\theta$ denotes the posterior mean estimated with Tweedie's formula.

DiffusionPDE. This method employs diffusion posterior sampling (DPS) [9], where each intermediate sample $x_t$ is guided by the gradient of the PDE residual evaluated on the posterior mean:
$$x_{t - dt} = x_t - v_\theta(x_t, t) \cdot dt - \eta_t \nabla_{x_t} \|\mathcal{R}(\hat{x}_\theta(x_t, t))\|^2, \tag{14}$$
where $v_\theta(x_t, t)$ is the learned velocity field of the reverse-time ODE sampler and $\eta_t$ is a hyperparameter. In our experiments, we set $\eta_t$ equal to $\lambda_{\text{train}}$ for each dataset.

PIDM. This method incorporates an additional residual loss into the diffusion training objective, evaluated on the posterior mean $\mathbb{E}[x_0 \mid x_t]$. Specifically, PIDM [3] augments the standard diffusion loss with a physics-based term:
$$\mathcal{L}_{\text{PIDM}} = \mathcal{L}_{\text{diffusion}} + \lambda_t \|\mathcal{R}(\hat{x}_\theta(x_t, t))\|^2, \tag{15}$$
where $\mathcal{L}_{\text{diffusion}}$ is the original diffusion training loss and $\lambda_t$ is the residual loss weight. In our experiments, we set $\lambda_t$ to $\lambda_{\text{train}}$ for each dataset, since this gives satisfactory performance.

D-Flow. For this baseline [4], we build on the official implementation of ECI-sampling [8] and introduce an additional PDE residual loss evaluated on the final sample. The weighting $\lambda_{\text{train}}$ is aligned with our setup across datasets. Specifically, the implementation follows the D-Flow setup in ECI-sampling [8]: we discretize the sampling trajectory into 100 denoising steps and perform gradient-based optimization of the input noise over 50 iterations to minimize the physics residual loss. At each iteration, gradients are backpropagated through the entire 100-step trajectory, resulting in a total of 5,000 function evaluations (NFE) per sample. This leads to significantly higher computational cost than our one-step method.

Vanilla. This baseline samples directly from the trained teacher diffusion model without incorporating any PDE-based constraint or guidance mechanism.
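For reference, the ECI correction step of Eq. (13) in code form; `constrain` stands in for the correction operator $\mathcal{C}$ that substitutes the known values, and the function reproduces the update exactly as written above:

```python
import torch

@torch.no_grad()
def eci_step(x_t, t, dt, x0_hat, constrain):
    # One ECI-style DDIM update (Eq. 13): apply the hard-constraint
    # correction operator C to the posterior-mean estimate, then
    # recombine with the current noise direction for the next time level.
    x0_c = constrain(x0_hat)
    return x0_c * (1 - t + dt) + (x_t - x0_hat) * (t - dt)
```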
D Generative Evaluations on More Datasets

In this section, we report the performance of our method on additional datasets, with comparisons to the other baseline methods, as shown in Table 4. PIDDM surpasses all baselines, especially in the physics residual error.

Table 4: Generative metrics on various constrained PDEs. The PDE error is the MSE of the evaluated physics residual error. The best results are in bold.

Dataset          Metric              PIDDM-1  PIDDM-ref  ECI    DiffusionPDE  D-Flow  PIDM   FM
Helmholtz        MMSE (×10⁻¹)        0.265    0.185      0.318  0.335         0.140   0.352  0.296
                 SMSE (×10⁻¹)        0.195    0.169      0.289  0.301         0.106   0.325  0.210
                 PDE Error (×10⁻⁹)   0.054    0.034      2.135  1.812         0.680   1.142  2.104
                 NFE (×10³)          0.001    0.100      0.500  0.100         5.000   0.100  0.100
Stokes Problem   MMSE (×10⁻²)        0.298    0.182      0.335  0.342         0.301   0.361  0.310
                 SMSE (×10⁻²)        0.425    0.312      0.455  0.469         0.441   0.484  0.430
                 PDE Error (×10⁻³)   0.241    0.194      0.585  0.498         0.318   0.432  0.578
                 NFE (×10³)          0.001    0.100      0.500  0.100         5.000   0.100  0.100
Heat Equation    MMSE (×10⁻³)        0.901    0.845      4.620  4.600         1.452   4.580  4.544
                 SMSE (×10⁻²)        0.816    0.790      1.612  1.598         0.892   1.587  1.565
                 PDE Error (×10⁻⁵)   3.265    2.910      4.120  4.100         3.698   4.150  4.354
                 NFE (×10³)          0.001    0.100      0.500  0.100         5.000   0.100  0.100
Navier–Stokes    MMSE (×10⁻⁴)        0.285    0.264      0.302  0.299         0.288   0.306  0.294
                 SMSE (×10⁻⁴)        0.218    0.210      0.323  0.321         0.225   0.327  0.314
                 PDE Error (×10⁻⁵)   3.184    2.945      6.910  6.740         3.200   6.950  7.222
                 NFE (×10³)          0.001    0.100      0.500  0.100         5.000   0.100  0.100
Porous Medium    MMSE (×10⁻³)        4.555    4.210      7.742  7.698         5.203   7.762  7.863
                 SMSE (×10⁻¹)        2.143    2.051      2.573  2.602         2.327   2.589  2.639
                 PDE Error (×10⁻⁵)   3.412    3.110      4.982  4.945         3.548   4.917  5.523
                 NFE (×10³)          0.001    0.100      0.500  0.100         5.000   0.100  0.100
Stefan Problem   MMSE (×10⁻³)        0.231    0.220      0.248  0.249         0.238   0.252  0.245
                 SMSE (×10⁻³)        0.278    0.268      0.315  0.318         0.289   0.320  0.307
                 PDE Error (×10⁻²)   0.081    0.070      0.410  0.398         0.095   0.405  0.458
                 NFE (×10³)          0.001    0.100      0.500  0.100         5.000   0.100  0.100

E Qualitative Results on the Darcy Forward Problem

Figure 4 compares the predicted Darcy pressure fields and their corresponding data- and PDE-error maps for each baseline and for our PIDDM. DiffusionPDE and ECI reproduce the coarse flow pattern but exhibit large point-wise errors and pronounced residual bands. In contrast, PIDDM produces the visually sharpest solution and the lowest error intensities in both maps, confirming the quantitative gains reported in the main text.

Figure 4: Qualitative comparison on the Darcy forward problem (rows: ECI-sampling, DiffusionPDE, D-Flow, PIDDM). Each row shows a method's (left) predicted solution field, (middle) point-wise data error, and (right) PDE residual error. Our PIDDM (bottom row) delivers visibly lower data and PDE errors than the other baselines while maintaining sharp solution details.

F Additional Experiments

We investigate a controlled Mixture-of-Gaussians (MoG) setting to evaluate constraint satisfaction in generative models. The target distribution is a correlated, two-component Gaussian mixture:
$$p_{\text{MoG}}(x) = \tfrac{1}{2}\mathcal{N}\big(x; [-1, -1]^\top, \Sigma\big) + \tfrac{1}{2}\mathcal{N}\big(x; [+1, +1]^\top, \Sigma\big), \tag{16}$$
where $\Sigma = \sigma^2 \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$, $\sigma^2 = 0.04$,