text string | source string |
|---|---|
[2] Edward J. Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. [3] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea... | https://arxiv.org/abs/2505.22370v1 |
Neural Information Processing Systems, pages 1022–1035, 2021. [18] Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. AdaptFormer: Adapting vision transformers for scalable visual recognition. Advances in Neural Information Processing Systems, pages 16664–16678, 2022. [19] Fried... | https://arxiv.org/abs/2505.22370v1 |
Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. [36] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-train... | https://arxiv.org/abs/2505.22370v1 |
Toronto, 2009. [51] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1406–1415, 2019. [52] James Seale Smith, Yen-Chang Hsu, Lingyu Zhang, Ting Hua, Zsolt Kira... | https://arxiv.org/abs/2505.22370v1 |
t , (18) and the plasticity loss: E[LP t(Wt)] =−kt d⟨∆Wt,Gt⟩. (19) Proof. Stability Loss: By definition, the projected update is: ∆ˆWt=Uk tUk⊤ t∆Wt. Thus, the expected stability loss is: E[LS t] =−(t−1)E[⟨∆ˆWt,Gold t⟩] =−(t−1)E[⟨Uk tUk⊤ t∆Wt,Gold t⟩] =−(t−1)E[Tr(∆W⊤ tUk tUk⊤ tGold t)]. LetGold t=Pd i=1σiuiv⊤ ibe the SV... | https://arxiv.org/abs/2505.22370v1 |
of tasks 2–20 under different fine-tuning strategies. Method / Avg. Acc (Tasks 2–20): Only head, 66.08; Only head and the first task, 74.75; SplitLoRA, 81.47. [Figure: x-axis ticks 0, 5, 10, 15, 20 with label beginning "Number ..."; remaining plot text unrecoverable from extraction]... | https://arxiv.org/abs/2505.22370v1 |
arXiv:2505.22384v1 [cs.DS] 28 May 2025. Exact Algorithms and Lower Bounds for Forming Coalitions of Constrained Maximum Size. Foivos Fioravantes (a), Harmender Gahlawat (b,c), Nikolaos Melissinos (d). (a) Department of Theoretical Computer Science, Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech ... | https://arxiv.org/abs/2505.22384v1 |
that of utilitarian social welfare, where the goal is to maximize the total sum of the utilities of all the agents. Observe that if all edge weights are positive, then the maximum utilitarian utility is achieved by putting all agents in the same coalition. But there are many practical scenarios, e.g., forming office teams to... | https://arxiv.org/abs/2505.22384v1 |
the parameterized complexity of the C-Coalition Formation problem, which is the version of the Coalition Formation problem with the added constraint that each coalition should be of size at most C. We consider two distinct variants of this problem according to the possibilities for the utilities of the agents. In the unweigh... | https://arxiv.org/abs/2505.22384v1 |
the input to have a star-like structure. In the following statements, vc denotes the vertex cover number of the input graph. Theorem 1.3. The Weighted C-Coalition Formation problem can be solved in time $\mathrm{vc}^{O(\mathrm{vc})} n^{O(1)}$, where vc denotes the vertex cover number of the input graph. The next question we consider is whether the ... | https://arxiv.org/abs/2505.22384v1 |
Preliminaries 2.1. Graph Theory. We follow standard graph-theoretic notation [23]. In particular, we will use V(G) and E(G) to refer to the vertices and edges of G respectively; if no ambiguity arises, the parenthesis referring to G will be dropped. Moreover, we denote by N_G(v) the neighbors of v in G and we use d_G(v) to denote th... | https://arxiv.org/abs/2505.22384v1 |
size of the considered parameter is preserved. A kernelization algorithm is a polynomial-time algorithm that takes as input an instance (I, k) of a problem and outputs an equivalent instance (I′, k′) of the same problem such that the size of (I′, k′) is bounded by some computable function f(k). The problem is said to adm... | https://arxiv.org/abs/2505.22384v1 |
vertex integrity vi. We have that twc ≤_f vc. Moreover, tw ≤_f td ≤_f vi ≤_f vc, but twc is incomparable to tw. 3. Bounded Tree-width or Vertex Cover Number. This section includes both the positive and negative results we provide for graphs of bounded tree-width or bounded vertex cover number. 3.1. FPT Algorithm parameterize... | https://arxiv.org/abs/2505.22384v1 |
p} is of type (Col, S)_t if: • P is a C-partition of G[B↓_t], • for any i ≤ tw+1 and u ∈ B_t, we have that Col(u) = i if and only if u ∈ C_i ∩ B_t, and • S[i] = |C_i| for all i ∈ [tw+1]. For any C-partition P of type (Col, S)_t, the function Col describes the way that P partitions the set B_t. Also, the table S gives us the sizes of the sets of P that... | https://arxiv.org/abs/2505.22384v1 |
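As a rough illustration of how such DP states could be stored, the sketch below keys a table by the pair (Col, S): the coloring of the current bag and the vector of set sizes. The encoding and names are our own, not the paper's implementation.

```python
# Hypothetical encoding of the DP states (Col, S)_t for a bag B_t.
# Col assigns each bag vertex a color in [tw+1]; S[i] records |C_i|.
# Per state we keep only the best value W found so far.
from typing import Dict, Tuple

State = Tuple[Tuple[int, ...], Tuple[int, ...]]  # (Col over B_t, S)
table: Dict[State, int] = {}

def record(col: Tuple[int, ...], sizes: Tuple[int, ...], value: int) -> None:
    """Keep the maximum-value partition of each type (Col, S)_t."""
    key = (col, sizes)
    if key not in table or table[key] < value:
        table[key] = value

# Example: bag {u, v} with tw + 1 = 3 colors, both vertices in C_1, |C_1| = 2.
record(col=(1, 1), sizes=(2, 0, 0), value=5)
print(table)
```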
a C-partition of G[B↓_t], we have that P_c is a C-partition of G[B↓_c]. Furthermore, let Col′: B_c → [tw+1] be such that Col′(u) = Col(u) for all u ∈ B_c, and let S′ be a table where S′[i] = S[i] for all i ∈ [tw+1] \ {k} and S′[k] = S[k] − 1. Observe that P_c is of type (Col′, S′)_c. Since P_c is of type (Col′, S′)_c, we know that we have stored a tuple (Col′,... | https://arxiv.org/abs/2505.22384v1 |
C_p} of B_t of value W. We consider two cases: either u ∈ C_ℓ for some ℓ ≤ tw+1, or u ∈ C_ℓ for some ℓ > tw+1. Case 1: u ∈ C_ℓ for some ℓ ≤ tw+1. In this case, C_ℓ ∩ B_t ≠ ∅. This follows from the assumption that any C-partition P = {C_1, ..., C_p} we consider is such that for any set C_j ∈ P, if j ∈ [tw+1] then either C_j = ∅ or C_j ∩ B_t ≠ ∅, and because {v | v ∈ B... | https://arxiv.org/abs/2505.22384v1 |
(Col_t, S_t, W′, P_t) for t. We claim that P_t is of the same type as P and that P_t is an important partition of that type. Notice that u is the only vertex of B_c such that Col_c(u) = k. It follows that (Col_t, S_t, W′, P_t) was created by setting: • Col_t to be the restriction of Col_c to the set B_t, • S_t[k] = 0 and S_t[i] = S_c[i] for i ∈ [tw+1] \ ... | https://arxiv.org/abs/2505.22384v1 |
respectively. Let (Col, S_1)_{c_1} and (Col, S_2)_{c_2} be the types of P_1 and P_2, respectively (recall that, by construction, Col_1 = Col_2 = Col). The existence of P_1 (respectively P_2) guarantees that there is a tuple (Col, S_1, W_1, P′_1) (resp. (Col, S_2, W_2, P′_2)) stored for the node c_1 (resp. c_2). By the definition of P_1 and P_2, we ha... | https://arxiv.org/abs/2505.22384v1 |
the exposition, we will first present the construction of our reduction and a high-level idea of the proof before we proceed with the proof itself. We will then show that any optimal C-partition of the constructed graph satisfies a set of important properties that will be utilized in the reduction. The constructio... | https://arxiv.org/abs/2505.22384v1 |
{w^e_{R_1}, w^e_{R_2}} and the edges w^e_{L_i} w^e_{R_j} for all i, j ∈ [2] (i.e., V_e induces a K_{2,2}). Hereafter, let V_E = ⋃_{e∈E(H)} V_e and U_E = ⋃_{e∈E(H)} V(F_e). This completes the construction of G. High-level idea. The reduction works for a carefully chosen value of C. Also, each gadget that is added has the number of its vertices carefully tw... | https://arxiv.org/abs/2505.22384v1 |
Lemma 3.2. Let P = {C_1, ..., C_p} be an optimal C-partition of G and F be a vertex- or edge-gadget. There exists a set C ∈ P such that C ⊇ V(F). Proof. Assume that this is not true and let F be a vertex- or edge-gadget such that C ∩ V(F) ≠ V(F) for all C ∈ P. We first show that max_{C∈P} |C ∩ V_2(F)| = x ≥ 2|V_2(F)|/3. Assume that max_{C∈P} |... | https://arxiv.org/abs/2505.22384v1 |
at least 2m + 10 − (m + 10) = m. This contradicts the optimality of P. Thus, we can assume that C ∩ V_2(F) = V_2(F). We now show that C ∩ V_1(F) = V_1(F). Assume that this is not true and let v ∈ V_1(F) be such that v ∉ C. Notice that we may have up to d(v) − |V_2(F)| edges in E(P) that are incident to v. If |C| < C, then moving v from its set to ... | https://arxiv.org/abs/2505.22384v1 |
exists a set C ∈ P such that V(F) ⊆ C. Proof. By Lemma 3.4 we have that there exists a C ∈ P such that |C ∩ V_2(F)| ≥ 3|V_2(F)|/4 and V_1(F) ⊆ C. Assume that there exists a vertex u ∈ C \ V_2(F). We can assume that |C| = C, as otherwise we could include u into C and this would result in a C-partition with a higher value (as most of the neighbo... | https://arxiv.org/abs/2505.22384v1 |
for some e = uv ∈ E(H), and w ∈ C for some C ∈ P. If V(F_u) ∩ C = ∅ and V(F_v) ∩ C = ∅, then V(F_e) ∪ {w} ⊆ C. Proof. It follows by Lemma 3.2 that there exists a C′ ∈ P such that V(F_e) ⊆ C′ ⊆ V(F_e) ∪ V_e. Indeed, assuming otherwise, C′ would include vertices from a vertex-gadget. In this case we would have that |C′| > C, a contradiction. Assume that V(F_u) ∩... | https://arxiv.org/abs/2505.22384v1 |
of the connected components of G[V(P_E)]. Thus, in order to compute an upper bound on |E(P_E)|, it suffices to find an upper bound on the number of edges in G[S′ ∪ ⋃_{e∈E(H)} V(F_e)], for any set S′ ⊆ V_E where |S′| = |S|. For any G[S′ ∪ ⋃_{e∈E(H)} V(F_e)], where S′ ⊆ V_E, we define types of its connected components based on the size of their in... | https://arxiv.org/abs/2505.22384v1 |
to 3. W.l.o.g. let ℓ_1 = 2. By the construction of G, we get that |E(G[C_1])| ≤ |E(G[F_{e_1}])| + 3 (as all edge-gadgets have the same number of edges). Also, |E(G[C_2])| ≤ |E(G[F_{e_2}])| + 5. Now observe that |E(G[C_1′])| = |E(G[F_{e_1}])| + 8 and |E(G[C_2′])| = |E(G[F_{e_2}])| + 1. Therefore, |E(G[C_1′])| + |E(G[C_2′])| > |E(G[C_1])| + |E(G[C_2])|. Case 2... | https://arxiv.org/abs/2505.22384v1 |
create the sets C_1, ..., C_{n(n−1)}, one for each of them. We claim that P = {C_e | e ∈ E(H)} ∪ {C_v | v ∈ V(H)} ∪ {C_1, ..., C_{n(n−1)}} is a C-partition of G and v(P) = m_v|V(H)| + m_ℓ|V(H)|² + m_e|E(H)| + 10A|V(H)| + 8|E(H)|. Notice that any of the sets C ∈ {C_e | e ∈ E(H)} ∪ {C_1, ..., C_{n(n−1)}} has size at most C, as they are either verte... | https://arxiv.org/abs/2505.22384v1 |
any edge e ∈ E(H), either V_e ⊂ P_V or V_e ⊂ P_E. Let E′ = {e ∈ E(H) | V_e ⊂ P_E}. We claim that for any v ∈ V(H), we have d_{H−E′}(v) ∈ L(v). Let V(F_v) ⊆ C, for some C ∈ P, and let F be the list-gadget such that F ⊆ C and |C ∩ V_E| = x_v. By Corollary 3.9 we obtain that 2α_v = x_v, where α_v is the value represented by F, if the partition is of optimal value. Observe tha... | https://arxiv.org/abs/2505.22384v1 |
to some vertex in U). Next, we guess P′ = {C_1, ..., C_p} (here, p ≤ vc), the intersection of the sets of an optimal C-partition of G with U; let W = v(P′). Notice that we can enumerate all $\mathrm{vc}^{O(\mathrm{vc})}$ partitions of U in $\mathrm{vc}^{O(\mathrm{vc})}$ time. Next, for each P′ we do the following (in $n^{O(1)}$ time). We create a new graph G′ as follows. First, we ... | https://arxiv.org/abs/2505.22384v1 |
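The guessing step above enumerates every partition of the vertex cover U, of which there are at most Bell(vc) = vc^{O(vc)}. A small self-contained Python generator for this enumeration (our illustration, not the paper's code) could look like:

```python
def partitions(items):
    """Yield every partition of `items` as a list of blocks.

    Each element either joins an existing block or opens a new one,
    so exactly Bell(n) partitions are produced; for n = |U| = vc this
    is the vc^{O(vc)} enumeration used in the algorithm sketched above.
    """
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # put `first` into block i
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part              # or open a new block

for p in partitions(["a", "b", "c"]):
    print(p)  # prints the Bell(3) = 5 partitions
```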
i-th position of c, we replace it with x_i. Notice that, at the moment, x_i appears at most twice for each i ∈ [3]. We add x_i to the set X_i, for all i ∈ [3]. Also, we add the clauses (x_1 ∨ ¬x_2) ∧ (x_2 ∨ ¬x_3) ∧ (x_3 ∨ ¬x_1). Thus, in any satisfying assignment of the formula, the variables x_1, x_2 and x_3 have the same assignment (the three clauses form the implication cycle x_1 → x_3 → x_2 → x_1). Notice that in ... | https://arxiv.org/abs/2505.22384v1 |
that each assignment is represented by at least one vertex and no vertex represents more than one assignment. Notice that U_{i,j} contains enough vertices to achieve this since |U_{i,j}| = √n. We do the same for all vertices in V_{i,j}. • We proceed by creating four copies u_1, ..., u_4 of each vertex u ∈ U_{i,j}. For each assign... | https://arxiv.org/abs/2505.22384v1 |
V′_u = {v | N(v) = U_u ∪ {v} for some v ∈ V_{i,j}}. By construction, we know that |V_u| = 30N log n and |V′_u| = 2·3N log n. We now consider P. Observe that the vertices of U_u are assigned to different components of P. Thus, we have that: • at most 4·30N log n = 120N log n of the edges incident to vertices of V_u are included in E(P), and • at most... | https://arxiv.org/abs/2505.22384v1 |
For any (i, j) ∈ [3] × [log n] and u ∈ U_{i,j}, if u ∈ C then any v ∈ N(u) ∩ V(G_{i,j}) also belongs to C. Proof. Assume that for some (i, j) ∈ [3] × [log n] there exist u ∈ U_{i,j} and w ∈ N(u) ∩ V(G_{i,j}) such that u ∈ C_k and w ∉ C_k. We will show that P is not optimal. It follows from Lemma 3.11 that U_u ⊆ C_k. We will distinguish the following two cases: either... | https://arxiv.org/abs/2505.22384v1 |
1 or 0. Assume that there exist (i, j) ∈ [3] × [log n] and v ∈ V_{i,j} such that v ∈ C_k and |C_k ∩ U_{i,j}| = 0. By this assumption and Lemma 3.13, we can conclude that N(v) ∩ C_k ⊆ V_C. Also, since each variable has at most 4 appearances and v represents at most (log n)/2 variables, we have that |N(v) ∩ C_k| ≤ 2 log n. Let u ∈ U_{i,j} be an arbitrary assignment verte... | https://arxiv.org/abs/2505.22384v1 |
we have that |E(P) ∩ E(G_{i,j})| = (180√n + 6)N log n for any (i, j) ∈ [3] × [log n]. Therefore, we have that |E(P) ∩ ⋃_{(i,j)∈[3]×[log n]} E(G_{i,j})| = 3N log²n (180√n + 6). Since there are no edges between V(G_{i,j}) and V(G_{i′,j′}) for (i, j) ≠ (i′, j′), it remains to count the edges incident to vertices of V_C. For any (i, j) ∈ [3] × [log n] and any c ∈ V_C, w... | https://arxiv.org/abs/2505.22384v1 |
n. We now need to argue about the optimality of P. Using the same arguments as in Lemma 3.15, we can show that E(P) ∩ E(G_{i,j}) includes exactly 3N log n (180√n + 6) edges. Thus, |E(P) ∩ ⋃_{(i,j)∈[3]×[log n]} E(G_{i,j})| = 3N log²n (180√n + 6). Therefore, we need to show that there are 2m additional edges in E(P) that are incident to vertices... | https://arxiv.org/abs/2505.22384v1 |
kernelization complexity exhibits a stark contrast between the two versions. This signifies that weights present a barrier from the kernelization-complexity point of view. In particular, we establish that while C-Coalition Formation parameterized by vc + C admits an O(vc²C)-vertex kernel, Weighted C-Coalition Formation par... | https://arxiv.org/abs/2505.22384v1 |
we cannot apply RR anymore, |V(G)| = O(vc²C). This, along with Lemma 4.1, implies the following theorem. Theorem 1.5. C-Coalition Formation admits a kernel with O(vc²C) vertices, where vc denotes the vertex cover number of the input graph. [Figure: sets V_i, V_j with vertices u^i_1, ..., v^i_{log n} and u^j_1, ..., u^j_{log n}, v^j_1, ..., v^j_{log n}, plus sets I_i, I_j; labels garbled in extraction] ... | https://arxiv.org/abs/2505.22384v1 |
some i, j ∈ [k]; notice that i ≠ j. We add all edges v_e w, where w ∈ S(u) ∪ S(v). Finally, we add a vertex x and we add all the edges between x and the vertices of V_e ∪ ⋃_{i∈[k]} V_i. We will call this new graph G. We complete the construction by defining the weight function w: E(G) → N as follows. • For any edge e = u_i v where v ∈ I_i, we set w(... | https://arxiv.org/abs/2505.22384v1 |
= 2C. Also, since any edge with weight 2C is incident to x, we may have fewer than C − 1 such edges in E(P) \ E(P′). Thus, the total weight of the edges in E(P) \ E(P′) is less than $C \cdot 2C + \binom{C}{2} < 3C^2$. Therefore, v(P′) > v(P), which contradicts the optimality of P. Lemma 4.4. Let P = {C_1, ..., C_p} be an optimal C-partition of G and x ∈ C_ℓ... | https://arxiv.org/abs/2505.22384v1 |
$(3C^2 + 2C)\log n + 2\binom{\log n}{2} + \binom{k}{2}(2\log n + 1)$. Indeed, for any i ∈ [k], the sum of the weights of the edges of G[C_i] is exactly $4C^3(C - \log n - 1) + 3C^2\log n + \binom{\log n}{2}$. Also, by construction, the sum of the weights of the edges of G[C] is exactly $k \cdot 2C\log n + k\binom{\log n}{2} + \binom{k}{2}(2\log n + 1)$. Finally, all the other sets are singleto... | https://arxiv.org/abs/2505.22384v1 |
vertices represent edges of H and there are no parallel edges in H. We can conclude that no two vertices v_e and v_{e′} in C can represent edges between vertices of the same sets. Also, since we have $\binom{k}{2}$ such vertices, for each pair (i, j) we have a vertex v_e that represents an edge uv where u ∈ V_i and v ∈ V_j. Now, consider the set ... | https://arxiv.org/abs/2505.22384v1 |
any connected component of G[V \ U] that is contained in a graph of type T_i; we will denote this graph by G_{T_i}. For each i ∈ [ℓ], we will store a set of vectors τ^i_j, for j ∈ [q], which contain all possible orderings of all possible partitions of V(G_{T_i}) into p + k sets (some of which may be empty). If G_{T_i} follows the vector τ^i_j = (α_1... | https://arxiv.org/abs/2505.22384v1 |
), where A = {a_1, ..., a_n}, be an instance of Unary Bin Packing. We construct an instance of C-Coalition Formation as follows: for each j ∈ [n], construct the clique K_j, which is of order s(a_j). Then, for each i ∈ [k], add one vertex b_i and all the edges between b_i and all the vertices of the cliques K_j, for all j ∈ [n]. Le... | https://arxiv.org/abs/2505.22384v1 |
j ∈ [n], we insert the item a_j into the bin B_i, i ∈ [k], if and only if K_j ⊆ C_i. It follows from Claim 5.2 that each item of A has been assigned to exactly one bin. Recall that for each j ∈ [n], the item a_j has size equal to the order of K_j (by construction). Moreover, for each j ∈ [n], the item a_j corresponds exactly to the clique K_j. ... | https://arxiv.org/abs/2505.22384v1 |
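The construction described in these two rows is simple enough to state as code. The networkx sketch below builds one clique K_j of order s(a_j) per item plus k bin vertices b_i adjacent to every clique vertex; names are ours, and the paper's exact choice of the capacity C is truncated in the excerpt, so we leave it out.

```python
import networkx as nx

def bin_packing_to_coalition(sizes, k):
    """Build the graph of the Unary Bin Packing reduction sketched above.

    sizes : item sizes s(a_1), ..., s(a_n)
    k     : number of bins
    """
    G = nx.Graph()
    cliques = []
    for j, s in enumerate(sizes):
        K = [f"K{j}_{v}" for v in range(s)]  # clique of order s(a_j)
        G.add_nodes_from(K)
        G.add_edges_from((a, b) for i, a in enumerate(K) for b in K[i + 1:])
        cliques.append(K)
    for i in range(k):
        b = f"b{i}"                          # one vertex per bin
        G.add_node(b)
        for K in cliques:                    # b_i adjacent to every clique
            G.add_edges_from((b, v) for v in K)
    return G

G = bin_packing_to_coalition([2, 3, 2], k=2)
print(G.number_of_nodes(), G.number_of_edges())  # 9 vertices, 19 edges
```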
Kohli, Vladimir Kolmogorov, and Morteza Zadimoghaddam. 2013. Optimal coalition structure generation in cooperative graph games. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 27, 81–87. [5] Christer Bäckström, Yue Chen, Peter Jonsson, Sebastian Ordyniak, and Stefan Szeider. 2012. The com... | https://arxiv.org/abs/2505.22384v1 |
173. Springer. https://doi.org/10.1007/978-3-662-53622-3 [24] Rodney G. Downey and Michael R. Fellows. 2013. Fundamentals of Parameterized Complexity. Springer. https://doi.org/10.1007/978-1-4471-5559-1 [25] Rodney G. Downey and Michael R. Fellows. 2013. Fundamentals of Parameterized Complexity. Springer. https://doi... | https://arxiv.org/abs/2505.22384v1 |
A Completeness Theory for Polynomial (Turing) Kernelization. Algorithmica 71, 3 (2015), 702–730. [44] Ayumi Igarashi, Kazunori Ota, Yuko Sakurai, and Makoto Yokoo. 2019. Robustness against agent failure in hedonic games. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, 364–370. [45... | https://arxiv.org/abs/2505.22384v1 |
arXiv:2505.22387v1 [cs.CV] 28 May 2025. DAM: Domain-Aware Module for Multi-Domain Dataset Condensation. Jaehyun Choi, Gyojin Han, Dong-Jae Lee, Sunghyun Baek, Junmo Kim. Korea Advanced Institute of Science and Technology (KAIST). {chlwogus, hangj0820, jhtwosun, baeksh, junmo.kim}@kaist.ac.kr. Abstract. Dataset Condensation (DC) h... | https://arxiv.org/abs/2505.22387v1 |
of single- and multi-domain training for existing dataset condensation methods (DC, DM, MTT) on the PACS dataset under a 10-images-per-class setting. In the single-domain setup, models are trained using only Cartoon domain images, assuming access to explicit domain labels. In contrast, the multi-domain dataset set-... | https://arxiv.org/abs/2505.22387v1 |
synthetic dataset while still maintaining the essential data characteristics of the original dataset for training deep learning models. This approach drastically reduces training time and computational costs, allowing models trained on the condensed dataset to achieve performance comparable to those trained on the orig... | https://arxiv.org/abs/2505.22387v1 |
often collected across various domains with distinct distributions. These domain shifts can significantly impact a model’s generalization performance; without addressing these differences, models may only be effective in limited, specific environments. Two prominent approaches to mitigate this issue are domain adaptati... | https://arxiv.org/abs/2505.22387v1 |
domain differences are mostly distinguishable. However, for unconstrained web data or large mixed datasets, explicit domain labels are mostly unavailable, both because such datasets are not built for classifying domains and because the distinctions between domains are vague or overlapping. We first defin... | https://arxiv.org/abs/2505.22387v1 |
four distinct domains and are leveraged not only to validate the effectiveness of DAM in the multi-domain setting but also to better analyze the differences between single- and multi-domain dataset settings. We note that the provided domain labels are not leveraged unless explicitly stated in the experiment setting. 4.2 Im... | https://arxiv.org/abs/2505.22387v1 |
supplementary material C. D is set to 4 for all experiments. Finally, we follow DM [3] for the evaluation protocol in all experiments, and the results presented in the tables are the average of 10 evaluation results. 4.3 Results. Main results. Table 1 demonstrates the performance in the single-domain setting with th... | https://arxiv.org/abs/2505.22387v1 |
on the same setting and show the results in Table 4. For a single-domain dataset setting, we isolate the target domain with an explicit domain label for condensing and evaluating. On the other hand, for a multi-domain dataset setting, the whole PACS dataset (i.e., all four domains) is utilized for the training. The eva... | https://arxiv.org/abs/2505.22387v1 |
experiment, we tested on the three domain generalization benchmarks and compared DM with DM+DAM under 1 IPC, using explicit domain labels only to isolate the target domain, which is used only during evaluation and ignored during the condensation process. As can be seen from Table 5, employing DAM with DM performed bet... | https://arxiv.org/abs/2505.22387v1 |
achieve better performance than baseline methods (i.e., w/o DAM). 6 Conclusion. In this work, we introduce Multi-Domain Dataset Condensation (MDDC), the first framework to explicitly tackle dataset condensation under multi-domain settings. To this end, we propose the Domain-Aware Module (DAM), a plug-and-play c... | https://arxiv.org/abs/2505.22387v1 |
on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=rTBL8OhdhH [12] G. Cazenavette, T. Wang, A. Torralba, A. A. Efros, and J.-Y. Zhu, "Generalizing dataset distillation via deep generative prior," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ... | https://arxiv.org/abs/2505.22387v1 |
International Conference on Computer Vision, 2013, pp. 1657–1664. [27] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan, "Deep hashing network for unsupervised domain adaptation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5018–5027. [28] S. Gidaris a... | https://arxiv.org/abs/2505.22387v1 |
additional experiments where synthetic data is initialized with real images, selecting a random image from the corresponding class in the real dataset, following prior works. These results, shown in Table A, represent averages over 10 runs, consistent with our main evaluation protocol. While all methods benefit from re... | https://arxiv.org/abs/2505.22387v1 |
We study the effect of varying the domain embedding weight λ, which balances the class loss and domain-aware loss in DAM. A smaller λreduces the influence of domain-specific learning, while a larger value encourages the model to attend more strongly to domain variations during condensation. As shown in Table C, perform... | https://arxiv.org/abs/2505.22387v1 |
domain masks, while higher values (e.g., 1 or 5) blend domain cues more evenly. As demonstrated in Table D, experiments with τ = 1 and τ = 5 show similar results, whereas the more discriminative setting τ = 0.1 yields a clear improvement in most configurations. As with the previous sweep, we omit MTT + DAM results for IPC ... | https://arxiv.org/abs/2505.22387v1 |
Gaussian noise initialization. Table H lists the hyperparameters used for MTT with DAM across all datasets. We initially started with the settings reported in the original MTT paper [4], and made minimal adjustments only when instability (e.g., NaN gradients or diverging loss) was observed. We constrained the sear... | https://arxiv.org/abs/2505.22387v1 |
results based on DC and DM baselines, and visualize the synthetic images and domain mask after the final condensation step. Figure 5: Visualization of the final output and domain masks in CIFAR-10 under the 10 IPC setting. The shown images are condensed with DC+DAM. Figure 6: Visualization of the final output and dom... | https://arxiv.org/abs/2505.22387v1 |
arXiv:2505.22389v1 [cs.LG] 28 May 2025. Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning. Haomiao Qiu (1,2), Miao Zhang (1*), Ziyue Qiao (2), Weili Guan (1), Min Zhang (1), Liqiang Nie (1). (1) Harbin Institute of Technology (Shenzhen), (2) Great Bay University. 24B951058@stu.hit.edu.cn, zhangmiao@hit.edu.c... | https://arxiv.org/abs/2505.22389v1 |
only current-task data is accessible and previous task parameters are reused for initialization; whereas model merging assumes that tasks are trained independently from the same pretrained model, without any data sharing. Both paradigms operate under task-isolated settings but differ in how and when task integration oc... | https://arxiv.org/abs/2505.22389v1 |
With the rise of large-scale pre-trained models [29,30,31], continual fine-tuning has become increasingly popular. However, full fine-tuning is computationally expensive, and more parameter-efficient tuning strategies—such as prompt-based learning [32,33,34], LoRA, and modular tuning methods [35]—have been propos... | https://arxiv.org/abs/2505.22389v1 |
be rewritten as: $\hat\theta_t = \hat\theta_{t-1} + \alpha_t \Delta\theta^*_t$, (4) which means that taking the weighted average of $\hat\theta_{t-1}$ and $\theta^*_t$ is equivalent to scaling $\Delta\theta^*_t$. This scaling strategy does not harm the parameters $\hat\theta_{t-1}$ learned for tasks 1 to t−1; it only adjusts the task vector for task t to reduce forgetting of old tasks. In our experime... | https://arxiv.org/abs/2505.22389v1 |
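A minimal PyTorch-style sketch of the update rule in Eq. (4): the merge only scales the task-t vector, leaving the previously merged parameters otherwise untouched. Tensor and function names are illustrative, not from the paper's code.

```python
import torch

def merge_step(theta_prev: dict, theta_task: dict, alpha: float) -> dict:
    """theta_t = theta_{t-1} + alpha * (theta_task - theta_{t-1}).

    Equivalent to the weighted average (1 - alpha) * theta_{t-1}
    + alpha * theta_task, i.e. scaling the task vector Delta_theta.
    """
    merged = {}
    for name, p_prev in theta_prev.items():
        delta = theta_task[name] - p_prev      # task vector for task t
        merged[name] = p_prev + alpha * delta  # scaled update
    return merged

# toy usage on fake state dicts
theta_prev = {"w": torch.zeros(3)}
theta_task = {"w": torch.ones(3)}
print(merge_step(theta_prev, theta_task, alpha=0.3)["w"])  # tensor([0.3, 0.3, 0.3])
```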
matrix $\hat F_i(\theta^*_i)$ is computed and stored at the time of training task i. 3.3 Train with Perturbation. Further, we can also reduce $\sum_{i=1}^{t}\delta_i(\alpha^*_t)$ to make the merged model optimal on all t tasks by optimizing $\theta^*_t$. Next, we abbreviate $H_i(\theta^*_i)$ as $H_i$; then we have $\sum_{i=1}^{t}\delta_i(\alpha^*_t) = \frac{1}{2}\sum_{i=1}^{t}(\hat\theta_{t-1}-\theta^*_i)^\top H_i(\hat\theta_{t-1}-\theta^*_i) - \sum_{i=1}^{t}$... | https://arxiv.org/abs/2505.22389v1 |
$= p_0 \mathcal{L}^{ce}_t(\theta) + p_+ \mathcal{L}^{ce}_t(\theta + \epsilon\Delta\theta^*_t) + p_- \mathcal{L}^{ce}_t(\theta - \epsilon\Delta\theta^*_t) = \mathcal{L}_t(\theta)$. (16) This sampling strategy reduces the forward cost per batch from 3× to 1×, without introducing bias into the gradient estimation. While variance may increase slightly, this technique enables us to scale P&M to large models and datasets efficiently. Algorith... | https://arxiv.org/abs/2505.22389v1 |
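A hedged sketch of the single-forward-pass estimator behind Eq. (16): because each of the three loss terms is sampled with probability equal to its mixture weight, returning the sampled term alone gives an unbiased estimate of L_t(θ). The function and names below are our illustration, not the paper's code.

```python
import random

def sampled_loss(loss_fn, theta, delta, eps, p0=1/3, pp=1/3, pm=1/3):
    """One-term estimator of
        p0*l(theta) + pp*l(theta + eps*delta) + pm*l(theta - eps*delta).
    Sampling each term with probability equal to its weight keeps the
    estimator unbiased at one forward pass per batch instead of three.
    """
    u = random.random()
    if u < p0:
        return loss_fn(theta)                # unperturbed term
    if u < p0 + pp:
        return loss_fn(theta + eps * delta)  # positive perturbation
    return loss_fn(theta - eps * delta)      # negative perturbation

# sanity check: the sample mean approaches the exact three-term mixture
l = lambda t: (t - 1.0) ** 2
est = sum(sampled_loss(l, 0.0, 1.0, 0.5) for _ in range(100_000)) / 100_000
print(est, (l(0.0) + l(0.5) + l(-0.5)) / 3)
```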
73.81±0.39, 65.23±0.45, 71.30±0.16; CODA-Prompt [32]: 76.63±0.27, 80.30±0.28, 74.05±0.41, 78.14±0.39, 69.38±0.33, 73.95±0.63; HiDe-Prompt [62]: 74.77±0.25, 78.15±0.24, 74.65±0.14, 78.46±0.18, 73.59±0.19, 77.93±0.19; InfLoRA [23]: 76.95±0.23, 81.81±0.14, 74.75±0.64, 80.67±0.55, 69.89±0.56, 76.68±0.57; SD-LoRA [21]: 79.15±0.20, 83.01±... | https://arxiv.org/abs/2505.22389v1 |
substantial improvements over LoRA—up to +13.6% on INR-20 and +11.1% on C100-10—highlighting the benefits of post-training merging in preserving task knowledge. While methods such as DARE and CoFIMA offer moderate improvements over LoRA, they consistently underperform compared to P&M. Notably, P&M exceeds CoFIMA by +3.... | https://arxiv.org/abs/2505.22389v1 |
perturbation enlarges the flatness and width of the low-loss basin around the merged model, making it more likely for the model to fall near an optimal region. This suggests that parameter perturbation helps avoid sharp minima and reduces parameter interference during model merging, thereby enhancing the generalization... | https://arxiv.org/abs/2505.22389v1 |
the diagonal empirical Fisher Information Matrix. However, this diagonal approximation may not fully capture the true curvature of the loss landscape, and thus does not always guarantee optimality. Exploring more accurate yet efficient curvature approximations is a direction for future work. References [1] German I. P... | https://arxiv.org/abs/2505.22389v1 |
Thomas Möllenhoff, Edoardo Maria Ponti, Iryna Gurevych, and Mohammad Emtiyaz Khan. Model merging by uncertainty-based gradient matching. In ICLR, 2024. [19] Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, and Li Shen. Deep model fusion: A survey. arXiv preprint arXiv:2309.15698, 2023. [20] Enneng Yang, Li ... | https://arxiv.org/abs/2505.22389v1 |
Muhammad Ferjad Naeem, Luc Van Gool, Didier Stricker, Federico Tombari, and Muhammad Zeshan Afzal. Introducing language guidance in prompt-based continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11463–11473, 2023. [35] Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, ... | https://arxiv.org/abs/2505.22389v1 |
subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023. [52] Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Concetto Spampinato, and Simone Calderara. Transfer without forgetting. In Proceedings of the European Conferenc... | https://arxiv.org/abs/2505.22389v1 |
task i. The matrix $H_i(\theta^*_i)$ denotes the Hessian of the loss function for task i evaluated at its optimum $\theta^*_i$. Substituting $\hat\theta_t = \hat\theta_{t-1} + \alpha_t\Delta\theta^*_t$ into the objective, we obtain: $\sum_{i=1}^{t}\frac{1}{2}(\hat\theta_{t-1}+\alpha_t\Delta\theta^*_t-\theta^*_i)^\top H_i(\theta^*_i)(\hat\theta_{t-1}+\alpha_t\Delta\theta^*_t-\theta^*_i)$. (19) Expanding the quadratic form inside the summation yields: $\sum_{i=1}^{t}\frac{1}{2}\big[(\hat\theta_{t-1}-\theta^*_i$... | https://arxiv.org/abs/2505.22389v1 |
with 60,000 images equally distributed across 100 classes. CUB200 is a fine-grained bird classification dataset containing 11,788 images over 200 categories. More results on DomainNet, ImageNet-A, CIFAR100, and CUB200. As shown in Tab. 5 and 6, across all four benchmarks, LoRA-P&M consistently outperforms both standard L... | https://arxiv.org/abs/2505.22389v1 |
arXiv:2505.22391v1 [cs.LG] 28 May 2025. Physics-Informed Distillation of Diffusion Models for PDE-Constrained Generation. Yi Zhang, Institute of Data Science, The University of Hong Kong, yizhang101@connect.hku.hk. Difan Zou, Institute of Data Science & School of Computing and Data Science, The University of Hong Kong, dzou@cs.hk... | https://arxiv.org/abs/2505.22391v1 |
a paired noise–image dataset for distillation. (c) Our proposed framework distills the teacher diffusion model and directly enforces physical constraints on the final generated sample x_0, avoiding the Jensen's Gap. ...learn the joint distribution over solution and coefficient fields x_0 = (u, a) from observed data, where a de... | https://arxiv.org/abs/2505.22391v1 |
models [ 58,21,28] learn a score function, ∇logp(xt), to reverse a predefined diffusion process, typically of the form xt=x0+σtε. A key characteristic of diffusion models is that sampling requires iteratively reversing this process over a sequence of timesteps. This iterative nature presents a challenge for controlled ... | https://arxiv.org/abs/2505.22391v1 |
Figure 2: Empirical illustration of the Jensen's Gap in physics-constrained diffusion models. (a) Absolute velocity error and angular discrepancy (1 − cos(θ)) between Diffusion Posterior Sampling (DPS) and the ground-truth conditional ODE velocity on the MoG dataset. (b) and (c) Histograms comparing the first (... | https://arxiv.org/abs/2505.22391v1 |
these constraints in diffusion-generated data, and the Jensen's Gap [15, 3, 23] it introduces. 3.1 Preliminaries on Physics Constraints. Physics constraints are typically expressed as a partial differential equation (PDE) F defined over a solution domain Ω ⊂ R^d, together with a boundary condition operator B defined on th... | https://arxiv.org/abs/2505.22391v1 |
heuristically down-weight PDE constraints at early denoising steps (large t) in training and sampling, respectively, where Jensen’s Gap is pronounced, and emphasize them neart→0, where the posterior mean approximation improves. ECI-sampling introduces stochastic resampling steps [ 61] to project theoretically inconsist... | https://arxiv.org/abs/2505.22391v1 |
diffusion training. To evaluate generative performance, we track the diffusion loss, which theoretically serves as an evidence lower bound (ELBO) [ 21,31,12,50,13]. The comparison results are shown in Fig. 2d, revealing a significant increase in diffusion loss when the PDE residual loss is incorporated. This suggests t... | https://arxiv.org/abs/2505.22391v1 |
objectives: (1) learning to map a noise sample to the final generated output predicted by the teacher model, and (2) enforcing physical consistency on this output via PDE residual minimization. Concretely, we begin by sampling a noise input ε ∼ N(0, I) and generating a target sample x_0 using the pre-trained teacher model via... | https://arxiv.org/abs/2505.22391v1 |
coefficient fields—from a learned distribution that satisfies the underlying PDE system. Our student model inherently supports this task through efficient one-step generation: given a latent variable ϵ ∼ N(0, I), the model directly outputs a sample x_0 = d_{θ′}(ϵ) that approximates a valid solution–coefficient pair. Beyond thi... | https://arxiv.org/abs/2505.22391v1 |
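A loose PyTorch sketch of the two-part objective described above: match the teacher's sample for the same noise, and penalize the physics residual of the student's one-step output. `student`, `teacher_sample`, and `residual` are hypothetical callables standing in for d_θ′, the frozen teacher sampler, and the PDE operator R.

```python
import torch

def distill_loss(student, teacher_sample, residual, eps, lam=10.0):
    """Distillation match + PDE residual penalty (our sketch, not the
    paper's code); lam plays the role of the physics weight lambda_train."""
    with torch.no_grad():
        x0_teacher = teacher_sample(eps)   # frozen teacher, e.g. ODE solve
    x0_student = student(eps)              # one-step sample d_theta(eps)
    match = torch.mean((x0_student - x0_teacher) ** 2)
    phys = torch.mean(residual(x0_student) ** 2)  # |R(x)|^2 on the output
    return match + lam * phys
```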
work [ 8,29]: MMSE measures the mean squared error of the sample mean; SMSE evaluates the error of the sample standard deviation, reflecting the quality of distribution modeling. PDE Error quantifies the violation of physical constraints using the physics residual error |R(x)|2. The number of function evaluations (NFE)... | https://arxiv.org/abs/2505.22391v1 |
NFE (×10³): 0.080, 0.500, 0.100, 5.000, 0.100. Inverse: MSE (×10⁻¹): 0.236, 0.545, 0.456, 0.428, 0.468; PDE Error (×10⁻⁴): 0.126, 1.505, 1.402, 0.438, 1.113. NFE (×10³): 0.080, 0.500, 0.100, 5.000, 0.100. Reconstruct: Coef MSE (×10⁻¹): 0.128, 0.395, 0.240, 0.158, 0.179; Sol MSE (×10⁻¹): 0.102, 0.219, 0.143, 0.125, 0.147; PDE Error (×10⁻⁴): 0.143, 1.205, 1.2... | https://arxiv.org/abs/2505.22391v1 |
0.278, 0.127, 0.255, 0.705, 0.398, 0.372; PDE Error (×10⁻⁴): 0.145, 0.129, 0.098, 0.134, 0.354, 0.154, 0.157. Inverse: MSE (×10⁻¹): 0.236, 0.195, 0.136, 0.188, 0.503, 0.284, 0.271; PDE Error (×10⁻⁴): 0.115, 0.126, 0.079, 0.121, 0.321, 0.143, 0.139. Reconstruct: Coef MSE (×10⁻¹): 0.128, 0.107, 0.913, 0.954, 0.294, 0.133, 0.138; Sol MSE (×10⁻¹): 0.102, 0.084... | https://arxiv.org/abs/2505.22391v1 |
Eric Gu. TRACT: Denoising diffusion models with transitive closure time-distillation. arXiv preprint arXiv:2303.04248, 2023. [6] Changgu Chen, Libing Yang, Xiaoyan Yang, Lianggangxu Chen, Gaoqi He, Changbo Wang, and Yang Li. FIND: Fine-tuning initial noise distribution with policy optimization for diffusion models. In ... | https://arxiv.org/abs/2505.22391v1 |
Bergman, and Adrienne S. Lavine. Fundamentals of Heat and Mass Transfer. John Wiley & Sons, 7th edition, 2011. [26] John David Jackson. Classical Electrodynamics. John Wiley & Sons, 3rd edition, 1998. [27] Christian Jacobsen, Yilin Zhuang, and Karthik Duraisamy. CoCoGen: Physically-consistent and conditioned score-base... | https://arxiv.org/abs/2505.22391v1 |
equations through gradient–log–density estimation. Entropy, 22(8):802, 2020. [46] S. Chandra Mouli, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, Andrew Stuart, Michael W. Mahoney, and Yuyang Wang. Using uncertainty quantification to characterize and improve out-of-domain learning for PDEs. arXiv preprint arXiv:2403.... | https://arxiv.org/abs/2505.22391v1 |
2024. [65] Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, and Anima Anandkumar. Fast sampling of diffusion models via operator learning. In International conference on machine learning , pages 42390–42402. PMLR, 2023. [66] Zhenyu Zhou, Defang Chen, Can Wang, and Chun Chen. Fast ode-based sampling for d... | https://arxiv.org/abs/2505.22391v1 |
forward from T to 0. For every reverse-time SDE, there exists a corresponding deterministic process which shares the same density evolution, i.e., $\{p_t(x_t)\}_{t=0}^{T}$ [58]. Specifically, this deterministic process follows an ODE: $dx_t = [f(x_t) - \frac{1}{2} g(t)^2 \nabla_{x_t}\log p_t(x_t)]\,dt$, where t flows backwards from T to 0. The deterministic process defin... | https://arxiv.org/abs/2505.22391v1 |
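A small numpy sketch of integrating this probability-flow ODE with Euler steps, running backwards from t = T to t = 0; f, g, and the score are placeholder callables, not the paper's models.

```python
import numpy as np

def probability_flow_ode(x_T, f, g, score, T=1.0, steps=1000):
    """Euler integration of dx = [f(x) - 0.5 * g(t)^2 * score(x, t)] dt,
    with t flowing backwards from T to 0 (the deterministic process above)."""
    x = np.asarray(x_T, dtype=float)
    dt = T / steps
    for i in range(steps):
        t = T - i * dt
        drift = f(x) - 0.5 * g(t) ** 2 * score(x, t)
        x = x - drift * dt  # minus sign: time runs backwards
    return x

# toy: f = 0, g = 1, Gaussian marginals N(0, 1 + t) with score -x / (1 + t)
x0 = probability_flow_ode(
    x_T=2.0, f=lambda x: 0.0, g=lambda t: 1.0,
    score=lambda x, t: -x / (1.0 + t),
)
print(x0)
```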
diffusion coefficient and φ controls the phase of the sinusoidal initial condition. The exact solution is: $u_{\text{exact}}(x, t) = e^{-\alpha t}\sin(x + \varphi)$. In our experiments, we set α = 3 and take a := φ ∼ U[0, π] as the coefficient to jointly model with u. B.6 Navier–Stokes Equation. We adopt the 2D Navier–Stokes (NS) setup from ECI-Sampling [8... | https://arxiv.org/abs/2505.22391v1 |
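Assuming the governing PDE is the standard heat equation u_t = α u_xx (our reading, given the stated solution), a quick sympy check confirms the residual of u(x, t) = e^{−αt} sin(x + φ) vanishes:

```python
import sympy as sp

# Verify that u(x, t) = exp(-alpha*t) * sin(x + phi) solves u_t = alpha*u_xx:
# u_t = -alpha*u and u_xx = -u, so u_t - alpha*u_xx = 0.
x, t, alpha, phi = sp.symbols("x t alpha phi")
u = sp.exp(-alpha * t) * sp.sin(x + phi)

residual = sp.diff(u, t) - alpha * sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```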
observation loss, i.e., $\|s_{\theta'}(\varepsilon) - x\|^2$. The physics constraint weight $\lambda_{\text{train}}$ is set to 10 for Darcy Flow, Burgers' Equation, Stokes Flow, Heat Equation, Navier–Stokes, Porous Medium Equation, and Stefan Problem. For the Helmholtz and Poisson equations, we increase $\lambda_{\text{train}}$ to $10^6$ due to the stiffness of these PDEs. All experimen... | https://arxiv.org/abs/2505.22391v1 |
shown in Table 4. PIDDM marginally surpasses all baselines, especially in the physics residual error. Table 4: Generative metrics on various constrained PDEs. The PDE error is the MSE of the evaluated physics residual. The best results are in bold. Dataset Metric PIDDM-1 PIDDM-ref ECI DiffusionPDE D-Flow PIDM FM... | https://arxiv.org/abs/2505.22391v1 |